Sample records for low-level auditory cortex

  1. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.
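
    The voxelwise elevation tuning curves described above are, at their core, response profiles across the tested elevations that can be summarized by a preferred direction and a tuning width. As a rough illustration of that idea (not the authors' actual estimation pipeline), the Python sketch below fits a Gaussian tuning function to simulated single-voxel responses; the tested elevations, the simulated data, and the Gaussian form are all assumptions made for demonstration.

    ```python
    # Hypothetical sketch: summarize a voxel's elevation tuning with a Gaussian fit.
    import numpy as np
    from scipy.optimize import curve_fit

    # Tested elevations (degrees) and simulated single-voxel responses:
    # a broad preference for lower elevations plus measurement noise (assumed values).
    elevations = np.array([-45.0, -30.0, -15.0, 0.0, 15.0, 30.0, 45.0])
    rng = np.random.default_rng(0)
    responses = (np.exp(-((elevations + 30.0) ** 2) / (2 * 25.0 ** 2))
                 + 0.05 * rng.standard_normal(elevations.size))

    def gaussian(x, amp, mu, sigma, offset):
        """Gaussian tuning function: amplitude, preferred elevation, width, baseline."""
        return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

    p0 = [responses.max() - responses.min(),     # initial amplitude guess
          elevations[np.argmax(responses)],      # initial preferred elevation
          20.0,                                  # initial width (deg)
          responses.min()]                       # initial baseline
    params, _ = curve_fit(gaussian, elevations, responses, p0=p0)
    print(f"preferred elevation: {params[1]:.1f} deg, tuning width: {abs(params[2]):.1f} deg")
    ```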

  2. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    ERIC Educational Resources Information Center

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  3. Salicylate-induced cochlear impairments, cortical hyperactivity and re-tuning, and tinnitus.

    PubMed

    Chen, Guang-Di; Stolzberg, Daniel; Lobarinas, Edward; Sun, Wei; Ding, Dalian; Salvi, Richard

    2013-01-01

    High doses of sodium salicylate (SS) have long been known to induce temporary hearing loss and tinnitus, effects attributed to cochlear dysfunction. However, our recent publications reviewed here show that SS can induce profound, permanent, and unexpected changes in the cochlea and central nervous system. Prolonged treatment with SS permanently decreased the cochlear compound action potential (CAP) amplitude in vivo. In vitro, high-dose SS resulted in a permanent loss of spiral ganglion neurons and nerve fibers, but did not damage hair cells. Acute treatment with high-dose SS produced a frequency-dependent decrease in the amplitude of distortion product otoacoustic emissions and CAP. Losses were greatest at low and high frequencies, but least at the mid-frequencies (10-20 kHz), the band that corresponds to the tinnitus pitch measured behaviorally. In the auditory cortex, medial geniculate body and amygdala, high-dose SS enhanced sound-evoked neural responses at high stimulus levels, but it suppressed activity at low intensities and elevated response threshold. When SS was applied directly to the auditory cortex or amygdala, it only enhanced sound-evoked activity, but did not elevate response threshold. Current source density analysis revealed enhanced current flow into the supragranular layer of auditory cortex following systemic SS treatment. Systemic SS treatment also altered tuning in auditory cortex and amygdala; low-frequency and high-frequency multiunit clusters up-shifted or down-shifted their characteristic frequency into the 10-20 kHz range, thereby altering auditory cortex tonotopy and enhancing neural activity at mid-frequencies corresponding to the tinnitus pitch. These results suggest that SS-induced hyperactivity in auditory cortex originates in the central nervous system, that the amygdala potentiates these effects, and that the SS-induced tonotopic shifts in auditory cortex, the putative neural correlate of tinnitus, arise from the interaction between the frequency-dependent losses in the cochlea and hyperactivity in the central nervous system. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
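
    The speech-brain coupling at issue here is a phase relationship between quasi-rhythmic components of speech and low-frequency cortical oscillations. As a minimal sketch of how such coupling can be quantified (not the transfer-entropy analysis used in the study), the Python code below computes a phase-locking value between a placeholder speech envelope and a placeholder cortical signal; the signal names, sampling rate, and 1-7 Hz band are assumptions.

    ```python
    # Hypothetical sketch: phase-locking value between a speech envelope and a cortical trace.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 200.0                                    # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)                  # 60 s of simulated data
    speech_env = 1 + np.sin(2 * np.pi * 4 * t)    # placeholder speech envelope (4 Hz rhythm)
    rng = np.random.default_rng(0)
    cortical = np.sin(2 * np.pi * 4 * t + 0.5) + 0.5 * rng.standard_normal(t.size)  # placeholder signal

    def bandpass(x, lo, hi, fs, order=4):
        """Zero-phase band-pass filter."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    band = (1.0, 7.0)                             # delta/theta range in Hz (assumed)
    phase_env = np.angle(hilbert(bandpass(speech_env, band[0], band[1], fs)))
    phase_ctx = np.angle(hilbert(bandpass(cortical, band[0], band[1], fs)))

    # Phase-locking value: 1 = perfectly consistent phase lag, 0 = no coupling.
    plv = np.abs(np.mean(np.exp(1j * (phase_env - phase_ctx))))
    print(f"phase-locking value in the {band[0]}-{band[1]} Hz band: {plv:.2f}")
    ```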

  5. Frontal Top-Down Signals Increase Coupling of Auditory Low-Frequency Oscillations to Continuous Speech in Human Listeners

    PubMed Central

    Park, Hyojin; Ince, Robin A.A.; Schyns, Philippe G.; Thut, Gregor; Gross, Joachim

    2015-01-01

    Summary Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433

  6. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE PAGES

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...

    2017-06-30

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  7. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  8. Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.

    PubMed

    Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F

    2017-08-16

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors 0270-6474/17/377906-15$15.00/0.

  9. Neurons and Objects: The Case of Auditory Cortex

    PubMed Central

    Nelken, Israel; Bar-Yosef, Omer

    2008-01-01

    Sounds are encoded into electrical activity in the inner ear, where they are represented (roughly) as patterns of energy in narrow frequency bands. However, sounds are perceived in terms of their high-order properties. It is generally believed that this transformation is performed along the auditory hierarchy, with low-level physical cues computed at early stages of the auditory system and high-level abstract qualities at high-order cortical areas. The functional position of primary auditory cortex (A1) in this scheme is unclear – is it ‘early’, encoding physical cues, or is it ‘late’, already encoding abstract qualities? Here we argue that neurons in cat A1 show sensitivity to high-level features of sounds. In particular, these neurons may already show sensitivity to ‘auditory objects’. The evidence for this claim comes from studies in which individual sounds are presented singly and in mixtures. Many neurons in cat A1 respond to mixtures in the same way they respond to one of the individual components of the mixture, and in many cases neurons may respond to a low-level component of the mixture rather than to the acoustically dominant one, even though the same neurons respond to the acoustically-dominant component when presented alone. PMID:18982113

  10. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603
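
    Contrast gain control, as described above, amounts to rescaling a neuron's input-output gain according to the spectrotemporal contrast of recent stimulation. The Python sketch below caricatures this with a divisive gain term driven by the contrast (standard deviation relative to mean level) of a simulated spectrogram; all parameters and variable names are assumptions made for demonstration, not the fitted model from the study.

    ```python
    # Hypothetical sketch: divisive gain control driven by spectrotemporal contrast.
    import numpy as np

    rng = np.random.default_rng(0)
    n_freq, n_time = 32, 2000
    # Two simulated spectrograms (level in dB-like units) differing only in contrast.
    low_contrast_spec = 60 + 3 * rng.standard_normal((n_freq, n_time))
    high_contrast_spec = 60 + 12 * rng.standard_normal((n_freq, n_time))

    def contrast(spec):
        """Spectrotemporal contrast: variation in level relative to the mean level."""
        return spec.std() / spec.mean()

    def response(spec, weights, gain_const=0.3):
        """Linear spectrotemporal drive scaled by a divisive, contrast-dependent gain."""
        drive = weights @ (spec - spec.mean())      # mean-subtracted linear drive per time bin
        gain = 1.0 / (gain_const + contrast(spec))  # lower contrast -> higher gain
        return gain * drive

    weights = rng.standard_normal(n_freq) / n_freq  # toy spectral receptive field
    for name, spec in [("low contrast", low_contrast_spec), ("high contrast", high_contrast_spec)]:
        r = response(spec, weights)
        print(f"{name}: contrast = {contrast(spec):.3f}, "
              f"gain = {1.0 / (0.3 + contrast(spec)):.2f}, response s.d. = {r.std():.3f}")
    ```

    With these toy numbers the gain is higher for the low-contrast input, so the fourfold difference in input contrast is only partially reflected in the response variability, mirroring the partial compensation described in the abstract.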

  11. VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway

    PubMed Central

    Hackett, Troy A.; Takahata, Toru; Balaram, Pooja

    2011-01-01

    The vesicular glutamate transporters (VGLUTs) regulate storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II – VI of the core, belt and parabelt regions. VGLUT2 was most strongly expressed by neurons in layers IIIb and IV, weakly by neurons in layers II – IIIa, and at very low levels in layers V – VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in auditory cortex and at variable levels among auditory subcortical nuclei. These patterns imply that VGLUT2 is the main vesicular glutamate transporter in subcortical and thalamocortical (TC) circuits, whereas VGLUT1 is dominant in cortico-cortical (CC) and cortico-thalamic (CT) systems of projections. The results also suggest that VGLUT mRNA expression patterns in primates are similar to rodents, and establish a baseline for detailed studies of these transporters in selected circuits of the auditory system. PMID:21111036

  12. VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway.

    PubMed

    Hackett, Troy A; Takahata, Toru; Balaram, Pooja

    2011-04-01

    The vesicular glutamate transporters (VGLUTs) regulate the storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in the cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in the cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In the auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II-VI of the core, belt and parabelt regions. VGLUT2 was expressed most strongly by neurons in layers IIIb and IV, weakly by neurons in layers II-IIIa, and at very low levels in layers V-VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of the cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in the auditory cortex and at variable levels among auditory subcortical nuclei. These patterns imply that VGLUT2 is the main vesicular glutamate transporter in subcortical and thalamocortical (TC) circuits, whereas VGLUT1 is dominant in corticocortical (CC) and corticothalamic (CT) systems of projections. The results also suggest that VGLUT mRNA expression patterns in primates are similar to rodents, and establish a baseline for detailed studies of these transporters in selected circuits of the auditory system. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Tinnitus Intensity Dependent Gamma Oscillations of the Contralateral Auditory Cortex

    PubMed Central

    van der Loo, Elsa; Gais, Steffen; Congedo, Marco; Vanneste, Sven; Plazier, Mark; Menovsky, Tomas; Van de Heyning, Paul; De Ridder, Dirk

    2009-01-01

    Background Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. Methods and Findings In unilateral tinnitus patients (N = 15; 10 right, 5 left) source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). Conclusion Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception. PMID:19816597

  14. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  15. Functional Topography of Human Auditory Cortex

    PubMed Central

    Rauschecker, Josef P.

    2016-01-01

    Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior–posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or “periodotopy,” are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale “periodotopic” organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds. PMID:26818527

  16. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  17. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  18. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  19. Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.

    PubMed

    Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R

    2003-05-01

    A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16 or 32 channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with similar characteristics as normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means for a graded information transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
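
    The 50% point reported above is a standard summary of an intensity generalization gradient: the stimulus magnitude at which a fitted psychometric function crosses 50% detections. The Python sketch below illustrates the idea by fitting a logistic function to simulated response probabilities; the probe levels, data values, and logistic form are assumptions, not the study's measurements.

    ```python
    # Hypothetical sketch: estimate the 50% point of an intensity generalization gradient.
    import numpy as np
    from scipy.optimize import curve_fit

    currents = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])              # probe levels (microA), assumed
    p_detect = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])             # simulated response probabilities

    def logistic(x, x50, slope):
        """Logistic psychometric function with midpoint x50 and slope parameter."""
        return 1.0 / (1.0 + np.exp(-(x - x50) / slope))

    (x50, slope), _ = curve_fit(logistic, currents, p_detect, p0=[45.0, 10.0])
    print(f"estimated 50% point: {x50:.1f} microA (slope parameter {slope:.1f} microA)")
    ```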

  20. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  1. A Novel Functional Magnetic Resonance Imaging Paradigm for the Preoperative Assessment of Auditory Perception in a Musician Undergoing Temporal Lobe Surgery.

    PubMed

    Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J

    2018-03-01

    Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    PubMed Central

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  3. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  4. Information flow in the auditory cortical network

    PubMed Central

    Hackett, Troy A.

    2011-01-01

    Auditory processing in the cerebral cortex is comprised of an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421

  5. Characterization of the blood-oxygen level-dependent (BOLD) response in cat auditory cortex using high-field fMRI.

    PubMed

    Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G

    2013-01-01

    Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are both highly invasive and provide less evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of an anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
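
    The sparse design described above, with a noise burst presented at a randomized delay before each scan, lets the hemodynamic response be reconstructed by averaging baseline-corrected BOLD amplitudes at each stimulus-to-acquisition delay and locating the peak. The Python sketch below works through that logic on simulated data; the delays, the toy gamma-shaped response, and the noise level are assumptions made for demonstration.

    ```python
    # Hypothetical sketch: recover a BOLD peak latency from a sparse, randomized-delay design.
    import numpy as np

    rng = np.random.default_rng(0)
    delays = np.arange(1, 13)                      # stimulus-to-acquisition delays in s (1-12 s)

    def toy_bold(t, peak=3.5, a=2.0):
        """Toy gamma-like impulse response peaking near `peak` seconds (assumed shape)."""
        return (t / peak) ** a * np.exp(-a * (t - peak) / peak)

    n_trials = 20
    stim_trials = {d: toy_bold(d) + 0.05 * rng.standard_normal(n_trials) for d in delays}
    baseline_trials = 0.05 * rng.standard_normal(n_trials)   # scans with no preceding stimulus

    # Event-related response: mean baseline-corrected amplitude at each delay.
    response = np.array([stim_trials[d].mean() - baseline_trials.mean() for d in delays])
    peak_latency = delays[np.argmax(response)]
    print(f"estimated BOLD peak latency: {peak_latency} s")
    ```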

  6. Impaired downregulation of visual cortex during auditory processing is associated with autism symptomatology in children and adolescents with autism spectrum disorder.

    PubMed

    Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel

    2017-01-01

    Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  7. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  8. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  9. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect because participants reported more often the visual than auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  10. The Representation of Prediction Error in Auditory Cortex

    PubMed Central

    Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali

    2016-01-01

    To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251
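
    A simple way to make the notion of a trial-by-trial prediction error concrete is to score each stimulus in an oddball sequence by its surprise, -log p(stimulus), under a leaky running estimate of tone probabilities built from the recent past. The Python sketch below does exactly that for a simulated standard/oddball sequence; the leak rate and sequence statistics are assumptions, and this is not the information-theoretic model fitted in the study.

    ```python
    # Hypothetical sketch: surprise of each tone in an oddball sequence under a leaky estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    sequence = rng.choice([0, 1], size=500, p=[0.9, 0.1])   # 0 = standard tone, 1 = oddball tone

    def surprise(seq, leak=0.1, n_tones=2):
        """-log p(stimulus) under a leaky, exponentially weighted count of recent tones."""
        counts = np.ones(n_tones)                  # uniform prior over the two tones
        out = np.empty(seq.size)
        for t, s in enumerate(seq):
            p = counts / counts.sum()
            out[t] = -np.log(p[s])                 # surprise of the current stimulus
            counts *= (1.0 - leak)                 # gradually forget the past
            counts[s] += 1.0                       # update with the current stimulus
        return out

    pe = surprise(sequence)
    print("mean surprise for standards:", round(pe[sequence == 0].mean(), 2))
    print("mean surprise for oddballs :", round(pe[sequence == 1].mean(), 2))
    ```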

  11. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    PubMed Central

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  12. The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex

    PubMed Central

    Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J

    2014-01-01

    Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075

  13. Thalamic and cortical pathways supporting auditory processing

    PubMed Central

    Lee, Charles C.

    2012-01-01

    The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130

  14. The Effect of Spatial Smoothing on Representational Similarity in a Simple Motor Paradigm

    PubMed Central

    Hendriks, Michelle H. A.; Daniels, Nicky; Pegado, Felipe; Op de Beeck, Hans P.

    2017-01-01

    Multi-voxel pattern analyses (MVPA) are often performed on unsmoothed data, which is very different from the general practice of large smoothing extents in standard voxel-based analyses. In this report, we studied the effect of smoothing on MVPA results in a motor paradigm. Subjects pressed four buttons with two different fingers of the two hands in response to auditory commands. Overall, independent of the degree of smoothing, correlational MVPA showed distinctive patterns for the different hands in all studied regions of interest (motor cortex, prefrontal cortex, and auditory cortices). With regard to the effect of smoothing, our findings suggest that results from correlational MVPA show a minor sensitivity to smoothing. Moderate amounts of smoothing (in this case, 1-4 times the voxel size) improved MVPA correlations, with gains ranging from slight to large depending on the region involved. None of the regions showed signs of a detrimental effect of moderate levels of smoothing. Even higher amounts of smoothing sometimes had a positive effect, most clearly in low-level auditory cortex. We conclude that smoothing seems to have a minor positive effect on MVPA results; researchers should therefore be mindful of the choices they make regarding the level of smoothing. PMID:28611726
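    A rough sketch of the pipeline at issue, assuming synthetic patterns rather than fMRI data: two independent estimates of a region's activation pattern are smoothed with Gaussian kernels of increasing width, and the split-half pattern correlation used by correlational MVPA is recomputed at each smoothing level. The pattern size, noise level, and kernel widths are arbitrary choices for illustration.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(1)

      # A 'true' 3-D activation pattern within a region of interest, plus two
      # noisy measurements of it (e.g. odd and even runs).
      true_pattern = rng.normal(size=(12, 12, 12))
      run_a = true_pattern + rng.normal(0, 2.0, size=true_pattern.shape)
      run_b = true_pattern + rng.normal(0, 2.0, size=true_pattern.shape)

      for fwhm_voxels in [0, 1, 2, 4, 8]:
          sigma = fwhm_voxels / 2.355          # FWHM -> Gaussian sigma
          a = gaussian_filter(run_a, sigma) if sigma > 0 else run_a
          b = gaussian_filter(run_b, sigma) if sigma > 0 else run_b
          # Correlational MVPA: correlate the two pattern estimates voxelwise
          r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
          print(f"smoothing {fwhm_voxels} voxels FWHM: pattern correlation r = {r:.2f}")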

  15. The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex.

    PubMed

    Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J

    2014-09-01

    Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. © 2014 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  17. Representation of Dynamic Interaural Phase Difference in Auditory Cortex of Awake Rhesus Macaques

    PubMed Central

    Scott, Brian H.; Malone, Brian J.; Semple, Malcolm N.

    2009-01-01

    Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operating characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level. PMID:19164111

  18. Representation of dynamic interaural phase difference in auditory cortex of awake rhesus macaques.

    PubMed

    Scott, Brian H; Malone, Brian J; Semple, Malcolm N

    2009-04-01

    Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operating characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level.
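    The neurometric analysis mentioned in the last sentence can be sketched as follows, with simulated spike counts standing in for recorded responses: the area under the ROC curve compares the spike-count distributions with and without the directional cue at each signal level, and the neural detection threshold is taken as the lowest level at which the ROC area exceeds a criterion. The Poisson rates and the 0.75 criterion are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      def roc_area(dist_a, dist_b):
          """Probability that a random draw from dist_b exceeds one from dist_a
          (equivalent to the area under the ROC curve)."""
          a = np.asarray(dist_a)[:, None]
          b = np.asarray(dist_b)[None, :]
          return np.mean((b > a) + 0.5 * (b == a))

      signal_levels_db = np.arange(0, 60, 10)
      n_trials = 50
      baseline = rng.poisson(5, size=n_trials)      # reference condition (no cue)

      thresh_criterion = 0.75
      for level in signal_levels_db:
          # Simulated driven rate grows with level; purely illustrative
          driven = rng.poisson(5 + 0.15 * level, size=n_trials)
          auc = roc_area(baseline, driven)
          flag = " <- above criterion" if auc >= thresh_criterion else ""
          print(f"{level:2d} dB: ROC area = {auc:.2f}{flag}")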

  19. Representations of Spectral Differences between Vowels in Tonotopic Regions of Auditory Cortex

    ERIC Educational Resources Information Center

    Fisher, Julia

    2017-01-01

    This work examines the link between low-level cortical acoustic processing and higher-level cortical phonemic processing. Specifically, using functional magnetic resonance imaging, it looks at 1) whether or not the vowels [alpha] and [i] are distinguishable in regions of interest defined by the first two resonant frequencies (formants) of those…

  20. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
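    Phase entrainment of the kind reported here is commonly quantified with a phase-locking measure between the speech envelope and band-limited cortical activity. The sketch below computes a theta-band phase-locking value between two synthetic signals using a Hilbert transform; the filter band, the signal parameters, and the choice of PLV (rather than the coherence and mutual-information measures a given MEG study may use) are assumptions for illustration only.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 200.0                          # sampling rate (Hz)
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(3)

      # Synthetic theta-band (4 Hz) 'speech envelope' and a cortical signal
      # that partly follows it, plus noise.
      envelope = np.sin(2 * np.pi * 4 * t)
      cortical = 0.6 * np.sin(2 * np.pi * 4 * t + 0.4) + rng.normal(0, 1, t.size)

      def band_phase(x, lo, hi, fs):
          """Instantaneous phase in a band, via a zero-phase Butterworth filter."""
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return np.angle(hilbert(filtfilt(b, a, x)))

      phase_env = band_phase(envelope, 3, 7, fs)   # theta band
      phase_ctx = band_phase(cortical, 3, 7, fs)

      # Phase-locking value: 1 = perfect entrainment, 0 = none
      plv = np.abs(np.mean(np.exp(1j * (phase_env - phase_ctx))))
      print(f"theta-band phase-locking value: {plv:.2f}")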

  1. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    PubMed

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

    Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  2. Contrast Enhancement without Transient Map Expansion for Species-Specific Vocalizations in Core Auditory Cortex during Learning.

    PubMed

    Shepard, Kathryn N; Chong, Kelly K; Liu, Robert C

    2016-01-01

    Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups' postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning.

  3. Age-related decrease in the mitochondrial sirtuin deacetylase Sirt3 expression associated with ROS accumulation in the auditory cortex of the mimetic aging rat model.

    PubMed

    Zeng, Lingling; Yang, Yang; Hu, Yujuan; Sun, Yu; Du, Zhengde; Xie, Zhen; Zhou, Tao; Kong, Weijia

    2014-01-01

    Age-related dysfunction of the central auditory system, also known as central presbycusis, can affect speech perception and sound localization. Understanding the pathogenesis of central presbycusis will help to develop novel approaches to prevent or treat this disease. In this study, the mechanisms of central presbycusis were investigated using a mimetic aging rat model induced by chronic injection of D-galactose (D-Gal). We showed that malondialdehyde (MDA) levels were increased and manganese superoxide dismutase (SOD2) activity was reduced in the auditory cortex in natural aging and D-Gal-induced mimetic aging rats. Furthermore, mitochondrial DNA (mtDNA) 4834 bp deletion, abnormal ultrastructure and cell apoptosis in the auditory cortex were also found in natural aging and D-Gal mimetic aging rats. Sirt3, a mitochondrial NAD+-dependent deacetylase, has been shown to play a crucial role in controlling cellular reactive oxygen species (ROS) homeostasis. However, the role of Sirt3 in the pathogenesis of age-related central auditory cortex deterioration is still unclear. Here, we showed that decreased Sirt3 expression might be associated with increased SOD2 acetylation, which negatively regulates SOD2 activity. Oxidative stress accumulation was likely the result of low SOD2 activity and a decline in ROS clearance. Our findings indicate that Sirt3 might play an essential role, via the mediation of SOD2, in central presbycusis and that manipulation of Sirt3 expression might provide a new approach to combat aging and oxidative stress-related diseases.

  4. Optimal resource allocation for novelty detection in a human auditory memory.

    PubMed

    Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K

    1996-11-04

    A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.
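    The reported proportionality between MMR amplitude and stimulus information can be illustrated by computing the Shannon surprisal of a deviant stimulus under different oddball probabilities; the probabilities below are arbitrary examples, not the stimuli of this study.

      import numpy as np

      # Stimulus probabilities in three hypothetical oddball conditions
      conditions = {"deviant p = 0.20": 0.20,
                    "deviant p = 0.10": 0.10,
                    "deviant p = 0.05": 0.05}

      for name, p in conditions.items():
          surprisal_bits = -np.log2(p)       # information conveyed by one deviant
          print(f"{name}: deviant carries {surprisal_bits:.2f} bits")
          # Under the theory sketched above, MMR amplitude should scale
          # roughly linearly with this value.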

  5. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most of the progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  6. How do auditory cortex neurons represent communication sounds?

    PubMed

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Age-related decline of the cytochrome c oxidase subunit expression in the auditory cortex of the mimetic aging rat model associated with the common deletion.

    PubMed

    Zhong, Yi; Hu, Yujuan; Peng, Wei; Sun, Yu; Yang, Yang; Zhao, Xueyan; Huang, Xiang; Zhang, Honglian; Kong, Weijia

    2012-12-01

    The age-related deterioration in the central auditory system is well known to impair the abilities of sound localization and speech perception. However, the mechanisms involved in the age-related central auditory deficiency remain unclear. Previous studies have demonstrated that mitochondrial DNA (mtDNA) deletions accumulated with age in the auditory system. Also, a cytochrome c oxidase (CcO) deficiency has been proposed to be a causal factor in the age-related decline in mitochondrial respiratory activity. This study was designed to explore the changes of CcO activity and to investigate the possible relationship between the mtDNA common deletion (CD) and CcO activity as well as the mRNA expression of CcO subunits in the auditory cortex of D-galactose (D-gal)-induced mimetic aging rats at different ages. Moreover, we explored whether peroxisome proliferator-activated receptor-γ coactivator 1α (PGC-1α), nuclear respiratory factor 1 (NRF-1) and mitochondrial transcription factor A (TFAM) were involved in the changes of nuclear- and mitochondrial-encoded CcO subunits in the auditory cortex during aging. Our data demonstrated that d-gal-induced mimetic aging rats exhibited an accelerated accumulation of the CD and a gradual decline in the CcO activity in the auditory cortex during the aging process. The reduction in the CcO activity was correlated with the level of CD load in the auditory cortex. The mRNA expression of CcO subunit III was reduced significantly with age in the d-gal-induced mimetic aging rats. In contrast, the decline in the mRNA expression of subunits I and IV was relatively minor. Additionally, significant increases in the mRNA and protein levels of PGC-1α, NRF-1 and TFAM were observed in the auditory cortex of D-gal-induced mimetic aging rats with aging. These findings suggested that the accelerated accumulation of the CD in the auditory cortex may induce a substantial decline in CcO subunit III and lead to a significant decline in the CcO activity progressively with age despite compensatory increases of PGC-1α, NRF-1 and TFAM. Therefore, CcO may be a specific intramitochondrial site of age-related deterioration in the auditory cortex, and CcO subunit III might be a target in the development of presbycusis. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Unraveling the principles of auditory cortical processing: can we learn from the visual system?

    PubMed Central

    King, Andrew J; Nelken, Israel

    2013-01-01

    Studies of auditory cortex are often driven by the assumption, derived from our better understanding of visual cortex, that basic physical properties of sounds are represented there before being used by higher-level areas for determining sound-source identity and location. However, we only have a limited appreciation of what the cortex adds to the extensive subcortical processing of auditory information, which can account for many perceptual abilities. This is partly because of the approaches that have dominated the study of auditory cortical processing to date, and future progress will unquestionably profit from the adoption of methods that have provided valuable insights into the neural basis of visual perception. At the same time, we propose that there are unique operating principles employed by the auditory cortex that relate largely to the simultaneous and sequential processing of previously derived features and that therefore need to be studied and understood in their own right. PMID:19471268

  9. Leftward Lateralization of Auditory Cortex Underlies Holistic Sound Perception in Williams Syndrome

    PubMed Central

    Bendszus, Martin; Schneider, Peter

    2010-01-01

    Background Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Methodology/Principal Findings Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. Conclusions/Significance There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties. PMID:20808792

  10. Topographic Distribution of Stimulus-Specific Adaptation across Auditory Cortical Fields in the Anesthetized Rat

    PubMed Central

    Nieto-Diego, Javier; Malmierca, Manuel S.

    2016-01-01

    Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
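    A hedged sketch of the common SSA index (CSI) widely used in this literature (not necessarily the exact metric of this study): responses to two frequencies are compared when each serves as the rare deviant versus the frequent standard in an oddball sequence. The spike counts below are invented numbers chosen only to illustrate weak versus strong SSA.

      import numpy as np

      def ssa_index(d_f1, d_f2, s_f1, s_f2):
          """Common SSA index (CSI): responses to two frequencies when they are
          deviant (d) versus standard (s). Ranges from -1 to 1; positive values
          mean stronger responses to the same tone when it is rare."""
          dev = d_f1 + d_f2
          std = s_f1 + s_f2
          return (dev - std) / (dev + std)

      # Illustrative mean spike counts per tone (arbitrary numbers)
      print("primary field example:    CSI =",
            round(ssa_index(d_f1=12.0, d_f2=10.0, s_f1=9.0, s_f2=8.0), 2))
      print("nonprimary field example: CSI =",
            round(ssa_index(d_f1=14.0, d_f2=13.0, s_f1=5.0, s_f2=4.0), 2))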

  11. Touch activates human auditory cortex.

    PubMed

    Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta

    2006-05-01

    Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm³ region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.

  12. Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.

    PubMed

    Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko

    2017-03-01

    Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.

  13. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
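    The preprocessing stage described here (high-pass filtering of each spectrogram channel with a frequency-dependent time constant, followed by half-wave rectification) can be sketched as a leaky running-mean subtraction per channel. The time constants, their frequency dependence, and the toy spectrogram below are assumptions for illustration; the output would then feed a standard linear-nonlinear model.

      import numpy as np

      def ic_adaptation_stage(spectrogram, freqs_hz, dt=0.005, tau_at_1khz=0.2):
          """Rough sketch of a mean-level adaptation front end: each frequency
          channel is high-pass filtered by subtracting a leaky running mean
          (frequency-dependent time constant), then half-wave rectified.
          Time constants are illustrative, not the published values."""
          n_f, n_t = spectrogram.shape
          out = np.zeros_like(spectrogram)
          for i, f in enumerate(freqs_hz):
              tau = tau_at_1khz * np.sqrt(1000.0 / f)   # assumed frequency dependence
              alpha = dt / (tau + dt)
              running_mean = spectrogram[i, 0]
              for t in range(n_t):
                  running_mean += alpha * (spectrogram[i, t] - running_mean)
                  out[i, t] = max(spectrogram[i, t] - running_mean, 0.0)  # rectify
          return out

      # Example: a 20-channel spectrogram, 2 s at 5 ms bins, with a level step at 1 s
      freqs = np.geomspace(500, 16000, 20)
      spec = np.ones((20, 400)) * 40.0
      spec[:, 200:] += 20.0                      # sudden increase in mean level
      adapted = ic_adaptation_stage(spec, freqs)
      print("response just after the level step:", adapted[0, 201].round(2))
      print("response 1 s later (adapted):      ", adapted[0, 399].round(2))
      # The adapted output would then feed a standard linear-nonlinear model.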

  14. Diazepam reduces excitability of amygdala and further influences auditory cortex following sodium salicylate treatment in rats.

    PubMed

    Song, Yu; Liu, Junxiu; Ma, Furong; Mao, Lanqun

    2016-12-01

    Diazepam can reduce the excitability of the lateral amygdala and thereby suppress the excitability of the auditory cortex in rats following salicylate treatment, indicating that the lateral amygdala regulates auditory cortex activity during tinnitus. The aim of this study was to examine how diazepam modulates the spontaneous firing rates (SFRs) of the auditory cortex and lateral amygdala in a tinnitus rat model induced by sodium salicylate. A tinnitus rat model was first created with sodium salicylate, and SFRs were recorded from both the auditory cortex and the lateral amygdala. Diazepam was then injected intraperitoneally and the SFR changes in the lateral amygdala were recorded. Finally, diazepam was microinjected into the lateral amygdala and the SFR changes in the auditory cortex were recorded. SFRs of both the auditory cortex and the lateral amygdala increased after salicylate treatment. The SFR of the lateral amygdala decreased after intraperitoneal injection of diazepam, and microinjecting diazepam into the lateral amygdala decreased the SFR of the auditory cortex both ipsilaterally and contralaterally.

  15. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

    PubMed Central

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-01-01

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified “sensory” networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom–up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches. PMID:23202431
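    One way to turn a low-level feature time course into a hypothesis-driven regressor, roughly in the spirit of this record, is to take the rectified frame-to-frame change of the feature (a temporal discontinuity signal) and convolve it with a canonical haemodynamic response function. The synthetic envelope, the double-gamma parameters, and the repetition time below are assumptions, not the authors' model.

      import numpy as np
      from math import factorial

      tr = 2.0                                   # assumed repetition time (s)
      rng = np.random.default_rng(4)

      # A low-level auditory feature sampled at the TR, e.g. the soundtrack's
      # broadband envelope (synthetic here).
      envelope = np.abs(rng.normal(size=300)).cumsum() % 5.0

      # Temporal discontinuity: rectified frame-to-frame change of the feature
      discontinuity = np.maximum(np.diff(envelope, prepend=envelope[0]), 0)

      def canonical_hrf(tr, duration=30.0):
          """Double-gamma haemodynamic response function (illustrative parameters)."""
          t = np.arange(0, duration, tr)
          h = (t**5 * np.exp(-t) / factorial(5)
               - 0.35 * t**15 * np.exp(-t) / factorial(15))
          return h / h.sum()

      # Convolve with the HRF to obtain a GLM regressor
      regressor = np.convolve(discontinuity, canonical_hrf(tr))[: len(discontinuity)]
      regressor = (regressor - regressor.mean()) / regressor.std()
      print("regressor length:", len(regressor), "ready for a GLM design matrix")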

  16. Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.

    PubMed

    Oshurkova, E; Scheich, H; Brosch, M

    2008-06-02

    We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.

  17. The Neural Substrate for Binaural Masking Level Differences in the Auditory Cortex

    PubMed Central

    Gilbert, Heather J.; Krumbholz, Katrin; Palmer, Alan R.

    2015-01-01

    The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12–15 dB more detectable by inverting the waveform of either the tone or noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of signal level of a 500 Hz pure tone masked by low-passed white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, with a wide range in thresholds. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in the firing rate, consistent with predictions of cross-correlation models of binaural processing and that the psychophysical detection threshold is based on the lowest neural thresholds across the population. PMID:25568115

  18. Contrast Enhancement without Transient Map Expansion for Species-Specific Vocalizations in Core Auditory Cortex during Learning

    PubMed Central

    Shepard, Kathryn N.; Chong, Kelly K.

    2016-01-01

    Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups’ postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning. PMID:27957529

  19. Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia

    PubMed Central

    Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail

    2015-01-01

    Major theories on the neural basis of schizophrenic core symptoms highlight aberrant salience network activity (insula and anterior cingulate cortex), prefrontal hypoactivation, sensory processing deficits as well as an impaired connectivity between temporal and prefrontal cortices. The mismatch negativity is a potential biomarker of schizophrenia and its reduction might be a consequence of each of these mechanisms. In contrast to the previous electroencephalographic studies, functional magnetic resonance imaging may disentangle the involved brain networks at high spatial resolution and determine contributions from localized brain responses and functional connectivity to the schizophrenic impairments. Twenty-four patients and 24 matched control subjects underwent functional magnetic resonance imaging during an optimized auditory mismatch task. Haemodynamic responses and functional connectivity were compared between groups. These data sets further entered a diagnostic classification analysis to assess impairments on the individual patient level. In the control group, mismatch responses were detected in the auditory cortex, prefrontal cortex and the salience network (insula and anterior cingulate cortex). Furthermore, mismatch processing was associated with a deactivation of the visual system and the dorsal attention network indicating a shift of resources from the visual to the auditory domain. The patients exhibited reduced activation in all of the respective systems (right auditory cortex, prefrontal cortex, and the salience network) as well as reduced deactivation of the visual system and the dorsal attention network. Group differences were most prominent in the anterior cingulate cortex and adjacent prefrontal areas. The latter regions also exhibited a reduced functional connectivity with the auditory cortex in the patients. In the classification analysis, haemodynamic responses yielded a maximal accuracy of 83% based on four features; functional connectivity data performed similarly or worse for up to about 10 features. However, connectivity data yielded a better performance when including more than 10 features yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation as well as a disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments on the individual patient level, rendering it a potential biomarker of schizophrenia. PMID:25743635
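    A minimal sketch of the kind of diagnostic classification reported above, using synthetic features: features are ranked by a univariate group difference and a nearest-centroid classifier is evaluated with leave-one-out cross-validation as features are added. The data, classifier, and feature counts are placeholders; note that ranking features on the full sample, as done here for brevity, would bias a real analysis.

      import numpy as np

      rng = np.random.default_rng(6)
      n_per_group, n_features = 24, 40

      # Synthetic feature matrices (e.g. region-wise responses or connection
      # strengths) for patients and controls; a subset of features carries signal.
      controls = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
      patients = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
      patients[:, :8] -= 0.9                       # group difference in 8 features

      X = np.vstack([controls, patients])
      y = np.array([0] * n_per_group + [1] * n_per_group)

      # Rank features by a normalized group difference (computed on all data,
      # which is a simplification; a real analysis would nest this in the CV)
      diff = np.abs(controls.mean(0) - patients.mean(0)) / (X.std(0) + 1e-9)
      order = np.argsort(diff)[::-1]

      def loo_accuracy(X, y):
          """Leave-one-out accuracy of a nearest-centroid classifier."""
          correct = 0
          for i in range(len(y)):
              mask = np.arange(len(y)) != i
              c0 = X[mask & (y == 0)].mean(0)
              c1 = X[mask & (y == 1)].mean(0)
              pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
              correct += pred == y[i]
          return correct / len(y)

      for k in (2, 4, 8, 16):
          acc = loo_accuracy(X[:, order[:k]], y)
          print(f"{k:2d} features: leave-one-out accuracy = {acc:.2f}")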

  20. Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing

    PubMed Central

    Edwards, Erik; Chang, Edward F.

    2013-01-01

    Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819

  1. Genetic Reduction of Matrix Metalloproteinase-9 Promotes Formation of Perineuronal Nets Around Parvalbumin-Expressing Interneurons and Normalizes Auditory Cortex Responses in Developing Fmr1 Knock-Out Mice.

    PubMed

    Wen, Teresa H; Afroz, Sonia; Reinhard, Sarah M; Palacios, Arnold R; Tapia, Kendal; Binder, Devin K; Razak, Khaleel A; Ethell, Iryna M

    2017-10-13

    Abnormal sensory responses associated with Fragile X Syndrome (FXS) and autism spectrum disorders include hypersensitivity and impaired habituation to repeated stimuli. Similar sensory deficits are also observed in adult Fmr1 knock-out (KO) mice and are reversed by genetic deletion of Matrix Metalloproteinase-9 (MMP-9) through yet unknown mechanisms. Here we present new evidence that impaired development of parvalbumin (PV)-expressing inhibitory interneurons may underlie hyper-responsiveness in auditory cortex of Fmr1 KO mice via MMP-9-dependent regulation of perineuronal nets (PNNs). First, we found that PV cell development and PNN formation around GABAergic interneurons were impaired in developing auditory cortex of Fmr1 KO mice. Second, MMP-9 levels were elevated in P12-P18 auditory cortex of Fmr1 KO mice and genetic reduction of MMP-9 to WT levels restored the formation of PNNs around PV cells. Third, in vivo single-unit recordings from auditory cortex neurons showed enhanced spontaneous and sound-driven responses in developing Fmr1 KO mice, which were normalized following genetic reduction of MMP-9. These findings indicate that elevated MMP-9 levels contribute to the development of sensory hypersensitivity by influencing formation of PNNs around PV interneurons suggesting MMP-9 as a new therapeutic target to reduce sensory deficits in FXS and potentially other autism spectrum disorders. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Tracking the evolution of crossmodal plasticity and visual functions before and after sight restoration

    PubMed Central

    Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno

    2014-01-01

    Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432

  3. A Brain System for Auditory Working Memory.

    PubMed

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  4. Auditory cortex of bats and primates: managing species-specific calls for social communication

    PubMed Central

    Kanwal, Jagmeet S.; Rauschecker, Josef P.

    2014-01-01

    Individuals of many animal species communicate with each other using sounds or “calls” that are made up of basic acoustic patterns and their combinations. We are interested in questions about the processing of communication calls and their representation within the mammalian auditory cortex. Our studies compare in particular two species for which a large body of data has accumulated: the mustached bat and the rhesus monkey. We conclude that the brains of both species share a number of functional and organizational principles, which differ only in the extent to which and how they are implemented. For instance, neurons in both species use “combination-sensitivity” (nonlinear spectral and temporal integration of stimulus components) as a basic mechanism to enable exquisite sensitivity to and selectivity for particular call types. Whereas combination-sensitivity is already found abundantly at the primary auditory cortical and also at subcortical levels in bats, it becomes prevalent only at the level of the lateral belt in the secondary auditory cortex of monkeys. A parallel-hierarchical framework for processing complex sounds up to the level of the auditory cortex in bats and an organization into parallel-hierarchical, cortico-cortical auditory processing streams in monkeys is another common principle. Response specialization of neurons seems to be more pronounced in bats than in monkeys, whereas a functional specialization into “what” and “where” streams in the cerebral cortex is more pronounced in monkeys than in bats. These differences, in part, are due to the increased number and larger size of auditory areas in the parietal and frontal cortex in primates. Accordingly, the computational prowess of neural networks and the functional hierarchy resulting in specializations is established early and accelerated across brain regions in bats. The principles proposed here for the neural “management” of species-specific calls in bats and primates can be tested by studying the details of call processing in additional species. Also, computational modeling in conjunction with coordinated studies in bats and monkeys can help to clarify the fundamental question of perceptual invariance (or “constancy”) in call recognition, which has obvious relevance for understanding speech perception and its disorders in humans. PMID:17485400

  5. Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation.

    PubMed

    Chambers, Anna R; Salazar, Juan J; Polley, Daniel B

    2016-01-01

    Neurons at higher stages of sensory processing can partially compensate for a sudden drop in peripheral input through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy where >95% of afferent synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, auditory cortex (ACtx) processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the ACtx, the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of temporally modulated pulse trains and speech tokens were both greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied a persistent auditory processing at the level of the MGB that was likely enabled through increased central gain. Compensatory plasticity at the level of the auditory thalamus was less robust overall than previous observations in cortex or midbrain. Hierarchical differences in compensatory plasticity following sensorineural hearing loss may reflect differences in GABA circuit organization within the MGB, as compared to the ACtx or IC.

  6. Transient human auditory cortex activation during volitional attention shifting

    PubMed Central

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110

  7. Electrophysiological Evidence for the Sources of the Masking Level Difference

    ERIC Educational Resources Information Center

    Fowler, Cynthia G.

    2017-01-01

    Purpose: The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). Method: A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used…

  8. Effect of low-frequency rTMS on electromagnetic tomography (LORETA) and regional brain metabolism (PET) in schizophrenia patients with auditory hallucinations.

    PubMed

    Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril

    2007-01-01

    Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It was repeatedly reported that low frequency (

  9. Auditory Cortex Is Required for Fear Potentiation of Gap Detection

    PubMed Central

    Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.

    2014-01-01

    Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510

  10. Gap detection threshold in the rat before and after auditory cortex ablation.

    PubMed

    Syka, J; Rybalko, N; Mazelová, J; Druga, R

    2002-10-01

    Gap detection threshold (GDT) was measured in adult female pigmented rats (strain Long-Evans) by an operant conditioning technique with food reinforcement, before and after bilateral ablation of the auditory cortex. GDT was dependent on the frequency spectrum and intensity of the continuously present noise in which the gaps were embedded. The mean values of GDT for gaps embedded in white noise or low-frequency noise (upper cutoff frequency 3 kHz) at 70 dB sound pressure level (SPL) were 1.57+/-0.07 ms and 2.9+/-0.34 ms, respectively. Decreasing noise intensity from 80 dB SPL to 20 dB SPL produced a significant increase in GDT. The increase in GDT was relatively small in the range of 80-50 dB SPL for white noise and in the range of 80-60 dB for low-frequency noise. The minimal intensity level of the noise that enabled GDT measurement was 20 dB SPL for white noise and 30 dB SPL for low-frequency noise. Mean GDT values at these intensities were 10.6+/-3.9 ms and 31.3+/-4.2 ms, respectively. Bilateral ablation of the primary auditory cortex (complete destruction of the Te1 and partial destruction of the Te2 and Te3 areas) resulted in an increase in GDT values. By the fifth day after surgery, the rats were able to detect gaps in the noise. The values of GDT observed at this time were 4.2+/-1.1 ms for white noise and 7.4+/-3.1 ms for low-frequency noise at 70 dB SPL. During the first month after cortical ablation, recovery of GDT was observed. However, 1 month after cortical ablation, GDT still remained slightly higher than in controls (1.8+/-0.18 ms for white noise, 3.22+/-0.15 ms for low-frequency noise, P<0.05). A decrease in GDT values during the subsequent months was not observed.
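
    Illustrative sketch (hypothetical hit rates; the 50% criterion and log-duration interpolation are assumptions, not the study's procedure): a gap detection threshold can be read off a psychometric function as the gap duration at which detection performance crosses a fixed criterion.

      import numpy as np

      def gap_detection_threshold(gap_ms, hit_rate, criterion=0.5):
          """Gap duration (ms) at which hit rate first reaches `criterion`.

          Assumes hit rate grows monotonically with gap duration; interpolation is
          done on a log-duration axis, where psychometric functions are roughly linear.
          """
          gap_ms = np.asarray(gap_ms, dtype=float)
          hit_rate = np.asarray(hit_rate, dtype=float)
          order = np.argsort(gap_ms)
          gap_ms, hit_rate = gap_ms[order], hit_rate[order]
          if hit_rate.max() < criterion:
              return float("nan")                      # criterion never reached
          return float(np.exp(np.interp(criterion, hit_rate, np.log(gap_ms))))

      # Hypothetical detection rates for gaps embedded in 70 dB SPL white noise
      gaps_ms = [0.5, 1, 2, 4, 8, 16]
      hits = [0.05, 0.20, 0.60, 0.85, 0.95, 0.98]
      print(f"GDT at 70 dB SPL (white noise) ~ {gap_detection_threshold(gaps_ms, hits):.2f} ms")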

  11. Effect of sound intensity on tonotopic fMRI maps in the unanesthetized monkey.

    PubMed

    Tanji, Kazuyo; Leopold, David A; Ye, Frank Q; Zhu, Charles; Malloy, Megan; Saunders, Richard C; Mishkin, Mortimer

    2010-01-01

    The monkey's auditory cortex includes a core region on the supratemporal plane (STP) made up of the tonotopically organized areas A1, R, and RT, together with a surrounding belt and a lateral parabelt region. The functional studies that yielded the tonotopic maps and corroborated the anatomical division into core, belt, and parabelt typically used low-amplitude pure tones that were often restricted to threshold-level intensities. Here we used functional magnetic resonance imaging in awake rhesus monkeys to determine whether, and if so how, the tonotopic maps and the pattern of activation in core, belt, and parabelt are affected by systematic changes in sound intensity. Blood oxygenation level-dependent (BOLD) responses to groups of low- and high-frequency pure tones 3-4 octaves apart were measured at multiple sound intensity levels. The results revealed tonotopic maps in the auditory core that reversed at the putative areal boundaries between A1 and R and between R and RT. Although these reversals of the tonotopic representations were present at all intensity levels, the lateral spread of activation depended on sound amplitude, with increasing recruitment of the adjacent belt areas as the intensities increased. Tonotopic organization along the STP was also evident in frequency-specific deactivation (i.e. "negative BOLD"), an effect that was intensity-specific as well. Regions of positive and negative BOLD were spatially interleaved, possibly reflecting lateral inhibition of high-frequency areas during activation of adjacent low-frequency areas, and vice versa. These results, which demonstrate the strong influence of tonal amplitude on activation levels, identify sound intensity as an important adjunct parameter for mapping the functional architecture of auditory cortex.
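
    Illustrative sketch (simulated voxel responses; the preference index, smoothing, and reversal detection below are assumptions, not the study's analysis): tonotopic gradient reversals of the kind used to delineate the A1/R and R/RT borders can be located by computing a voxelwise low-versus-high frequency preference along a posterior-to-anterior axis and finding its sign changes.

      import numpy as np

      rng = np.random.default_rng(1)
      n_vox = 60                                      # voxels ordered posterior -> anterior
      axis = np.linspace(0.0, 1.0, n_vox)

      # Simulated ground truth: frequency preference reverses twice along the axis
      # (positive = prefers high frequencies, negative = prefers low frequencies).
      true_pref = np.cos(2 * np.pi * axis)
      beta_high = 1.0 + true_pref + 0.2 * rng.standard_normal(n_vox)   # BOLD betas, high tones
      beta_low = 1.0 - true_pref + 0.2 * rng.standard_normal(n_vox)    # BOLD betas, low tones

      pref_index = (beta_high - beta_low) / (beta_high + beta_low)     # voxelwise preference
      smoothed = np.convolve(pref_index, np.ones(5) / 5, mode="same")  # light spatial smoothing
      reversals = np.where(np.diff(np.sign(smoothed)) != 0)[0]
      print("putative areal borders at voxel indices:", reversals.tolist())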

  12. Functional specialization of medial auditory belt cortex in the alert rhesus monkey.

    PubMed

    Kusmierek, Pawel; Rauschecker, Josef P

    2009-09-01

    Responses of neural units in two areas of the medial auditory belt (middle medial area [MM] and rostral medial area [RM]) were tested with tones, noise bursts, monkey calls (MC), and environmental sounds (ES) in microelectrode recordings from two alert rhesus monkeys. For comparison, recordings were also performed from two core areas (primary auditory area [A1] and rostral area [R]) of the auditory cortex. All four fields showed cochleotopic organization, with best (center) frequency [BF(c)] gradients running in opposite directions in A1 and MM compared with R and RM. The medial belt, located medially to the core areas, was characterized by a stronger preference for band-pass noise than for pure tones. Response latencies were shorter for the two more posterior (middle) areas MM and A1 than for the two rostral areas R and RM, reaching values as low as 6 ms for high BF(c) in MM and A1, and strongly depended on BF(c). The medial belt areas exhibited a higher selectivity to all stimuli, in particular to noise bursts, than the core areas. An increased selectivity to tones and noise bursts was also found in the anterior fields; the opposite was true for highly temporally modulated ES. Analysis of the structure of neural responses revealed that neurons were driven by low-level acoustic features in all fields. Thus medial belt areas RM and MM have to be considered early stages of auditory cortical processing. The anteroposterior difference in temporal processing indices suggests that R and RM may belong to a different hierarchical level or a different computational network than A1 and MM.

  13. Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    PubMed Central

    Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.

    2007-01-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca’s area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, activity in bilateral PT sites increased between Session 1 and Session 2 and then decreased in the left PT from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca’s area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the “critical period.” The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186

  14. Early continuous white noise exposure alters l-alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor subunit glutamate receptor 2 and gamma-aminobutyric acid type a receptor subunit beta3 protein expression in rat auditory cortex.

    PubMed

    Xu, Jinghong; Yu, Liping; Zhang, Jiping; Cai, Rui; Sun, Xinde

    2010-02-15

    Auditory experience during the postnatal critical period is essential for the normal maturation of auditory function. Previous studies have shown that rearing infant rat pups under conditions of continuous moderate-level noise delayed the emergence of adult-like topographic representational order and the refinement of response selectivity in the primary auditory cortex (A1) beyond normal developmental benchmarks and indefinitely blocked the closure of a brief, critical-period window. To gain insight into the molecular mechanisms of these physiological changes after noise rearing, we studied expression of the AMPA receptor subunit GluR2 and GABA(A) receptor subunit beta3 in the auditory cortex after noise rearing. Our results show that continuous moderate-level noise rearing during the early stages of development decreases the expression levels of GluR2 and GABA(A)beta3. Furthermore, noise rearing also induced a significant decrease in the level of GABA(A) receptors relative to AMPA receptors. However, in adult rats, noise rearing did not have significant effects on GluR2 and GABA(A)beta3 expression or the ratio between the two subunits. These changes could have a role in the cellular mechanisms involved in the delayed maturation of auditory receptive field structure and topographic organization of A1 after noise rearing. Copyright 2009 Wiley-Liss, Inc.

  15. Electrophysiological Evidence for the Sources of the Masking Level Difference.

    PubMed

    Fowler, Cynthia G

    2017-08-16

    The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem codes temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.
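
    Illustrative sketch (threshold values are hypothetical): the MLD itself is simply the improvement in masked threshold for an antiphasic condition (e.g., SpiNo) relative to the homophasic condition (SoNo).

      def masking_level_difference(threshold_SoNo_dB: float, threshold_SpiNo_dB: float) -> float:
          """MLD in dB: how much lower the masked threshold is in the antiphasic condition."""
          return threshold_SoNo_dB - threshold_SpiNo_dB

      # Illustrative 500 Hz tone-in-noise thresholds (dB SPL), not measured values
      print(f"MLD = {masking_level_difference(72.0, 60.0):.1f} dB")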

  16. Cortico-Cortical Connectivity Within Ferret Auditory Cortex.

    PubMed

    Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J

    2015-10-15

    Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.

  17. A physiologically based model for temporal envelope encoding in human primary auditory cortex.

    PubMed

    Dugué, Pierre; Le Bouquin-Jeannès, Régine; Edeline, Jean-Marc; Faucon, Gérard

    2010-09-01

    Communication sounds exhibit temporal envelope fluctuations in the low frequency range (<70 Hz), and human speech has prominent 2-16 Hz modulations with a maximum at 3-4 Hz. Here, we propose a new phenomenological model of the human auditory pathway (from cochlea to primary auditory cortex) to simulate responses to amplitude-modulated white noise. To validate the model, performance was estimated by quantifying temporal modulation transfer functions (TMTFs). Previous models considered either the lower stages of the auditory system (up to the inferior colliculus) or only the thalamocortical loop. The present model, divided into two stages, is based on anatomical and physiological findings and includes the entire auditory pathway. The first stage, from the outer ear to the colliculus, incorporates inhibitory interneurons in the cochlear nucleus to increase performance at high stimulus levels. The second stage takes into account the anatomical connections of the thalamocortical system and includes the fast and slow excitatory and inhibitory currents. After optimizing the parameters of the model to reproduce the diversity of TMTFs obtained from human subjects, a patient-specific model was derived and the parameters were optimized to effectively reproduce both spontaneous activity and the oscillatory part of the evoked response. Copyright (c) 2010 Elsevier B.V. All rights reserved.
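
    Illustrative sketch (a toy response model; the first-order roll-off, sampling rate, and noise level are assumptions, not the published model): a TMTF point can be quantified by Fourier-analyzing the response to amplitude-modulated noise and expressing the spectral magnitude at the modulation rate relative to the DC component.

      import numpy as np

      fs = 1000.0                                     # response sampling rate (Hz)
      t = np.arange(0.0, 2.0, 1.0 / fs)               # 2 s of simulated response
      rng = np.random.default_rng(2)

      def toy_response(fm, cutoff=16.0):
          """Toy low-pass neural response to fully modulated AM noise at rate `fm` (Hz)."""
          gain = 1.0 / np.sqrt(1.0 + (fm / cutoff) ** 2)        # assumed first-order roll-off
          envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * fm * t))
          return 1.0 + gain * (envelope - 0.5) + 0.05 * rng.standard_normal(t.size)

      def tmtf_point(resp, fm):
          """Synchronized response at the modulation rate, in dB relative to the DC component."""
          spec = np.abs(np.fft.rfft(resp))
          freqs = np.fft.rfftfreq(resp.size, d=1.0 / fs)
          k = int(np.argmin(np.abs(freqs - fm)))                # FFT bin nearest the AM rate
          return 20.0 * np.log10(spec[k] / spec[0])

      for fm in (2, 4, 8, 16, 32, 64):
          print(f"{fm:>3d} Hz AM: {tmtf_point(toy_response(fm), fm):6.1f} dB")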

  18. Auditory Cortex Basal Activity Modulates Cochlear Responses in Chinchillas

    PubMed Central

    León, Alex; Elgueda, Diego; Silva, María A.; Hamamé, Carlos M.; Delano, Paul H.

    2012-01-01

    Background The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. Methodology/Principal Findings Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in CM amplitude in both types of experiments, the most common effect being a CM decrease, observed in fifteen animals. Concomitant with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes recovered completely after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. Conclusions/Significance These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the obtained effects suggests that there are at least two functional pathways from the auditory cortex to the cochlea. PMID:22558383

  19. Plasticity of spatial hearing: behavioural effects of cortical inactivation

    PubMed Central

    Nodal, Fernando R; Bajo, Victoria M; King, Andrew J

    2012-01-01

    The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635

  20. Mouse auditory cortex differs from visual and somatosensory cortices in the laminar distribution of cytochrome oxidase and acetylcholinesterase.

    PubMed

    Anderson, L A; Christianson, G B; Linden, J F

    2009-02-03

    Cytochrome oxidase (CYO) and acetylcholinesterase (AChE) staining density varies across the cortical layers in many sensory areas. The laminar variations likely reflect differences between the layers in levels of metabolic activity and cholinergic modulation. The question of whether these laminar variations differ between primary sensory cortices has never been systematically addressed in the same set of animals, since most studies of sensory cortex focus on a single sensory modality. Here, we compared the laminar distribution of CYO and AChE activity in the primary auditory, visual, and somatosensory cortices of the mouse, using Nissl-stained sections to define laminar boundaries. Interestingly, for both CYO and AChE, laminar patterns of enzyme activity were similar in the visual and somatosensory cortices, but differed in the auditory cortex. In the visual and somatosensory areas, staining densities for both enzymes were highest in layers III/IV or IV and in lower layer V. In the auditory cortex, CYO activity showed a reliable peak only at the layer III/IV border, while AChE distribution was relatively homogeneous across layers. These results suggest that laminar patterns of metabolic activity and cholinergic influence are similar in the mouse visual and somatosensory cortices, but differ in the auditory cortex.

  1. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex.

    PubMed

    Hausfeld, Lars; Riecke, Lars; Formisano, Elia

    2018-06-01

    Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this process by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant for elucidating the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  3. Reduced Glutamate Decarboxylase 65 Protein Within Primary Auditory Cortex Inhibitory Boutons in Schizophrenia

    PubMed Central

    Moyer, Caitlin E.; Delevich, Kristen M.; Fish, Kenneth N.; Asafu-Adjei, Josephine K.; Sampson, Allan R.; Dorph-Petersen, Karl-Anton; Lewis, David A.; Sweet, Robert A.

    2012-01-01

    Background Schizophrenia is associated with perceptual and physiological auditory processing impairments that may result from primary auditory cortex excitatory and inhibitory circuit pathology. High-frequency oscillations are important for auditory function and are often reported to be disrupted in schizophrenia. These oscillations may, in part, depend on upregulation of gamma-aminobutyric acid synthesis by glutamate decarboxylase 65 (GAD65) in response to high interneuron firing rates. It is not known whether levels of GAD65 protein or GAD65-expressing boutons are altered in schizophrenia. Methods We studied two cohorts of subjects with schizophrenia and matched control subjects, comprising 27 pairs of subjects. Relative fluorescence intensity, density, volume, and number of GAD65-immunoreactive boutons in primary auditory cortex were measured using quantitative confocal microscopy and stereologic sampling methods. Bouton fluorescence intensities were used to compare the relative expression of GAD65 protein within boutons between diagnostic groups. Additionally, we assessed the correlation between previously measured dendritic spine densities and GAD65-immunoreactive bouton fluorescence intensities. Results GAD65-immunoreactive bouton fluorescence intensity was reduced by 40% in subjects with schizophrenia and was correlated with previously measured reduced spine density. The reduction was greater in subjects who were not living independently at time of death. In contrast, GAD65-immunoreactive bouton density and number were not altered in deep layer 3 of primary auditory cortex of subjects with schizophrenia. Conclusions Decreased expression of GAD65 protein within inhibitory boutons could contribute to auditory impairments in schizophrenia. The correlated reductions in dendritic spines and GAD65 protein suggest a relationship between inhibitory and excitatory synapse pathology in primary auditory cortex. PMID:22624794

  4. Neural coding strategies in auditory cortex.

    PubMed

    Wang, Xiaoqin

    2007-07-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

  5. Corticofugal modulation of peripheral auditory responses

    PubMed Central

    Terreros, Gonzalo; Delano, Paul H.

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647

  6. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.

    2014-01-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes. PMID:25140133

  7. Cortico‐cortical connectivity within ferret auditory cortex

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; King, Andrew J.

    2015-01-01

    ABSTRACT Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency‐matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non‐overlapping, consistent with the non‐tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. J. Comp. Neurol. 523:2187–2210, 2015. © 2015 Wiley Periodicals, Inc. PMID:25845831

  8. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    PubMed

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Deep transcranial magnetic stimulation add-on for the treatment of auditory hallucinations: a double-blind study

    PubMed Central

    2012-01-01

    Background About 25% of schizophrenia patients with auditory hallucinations are refractory to pharmacotherapy and electroconvulsive therapy. We conducted a deep transcranial magnetic stimulation (TMS) pilot study in order to evaluate the potential clinical benefit of repeated left temporoparietal cortex stimulation in these patients. The results were encouraging, but a sham-controlled study was needed to rule out a placebo effect. Methods A total of 18 schizophrenic patients with refractory auditory hallucinations were recruited from the outpatient populations of Beer Yaakov MHC and other hospitals. Patients received 10 daily treatment sessions with low-frequency (1 Hz for 10 min) deep TMS applied over the left temporoparietal cortex, using the H1 coil at an intensity of 110% of the motor threshold. The procedure was either real or sham according to patient randomization. Patients were evaluated via the Auditory Hallucinations Rating Scale, Scale for the Assessment of Positive Symptoms-Negative Symptoms, Clinical Global Impressions, and Quality of Life Questionnaire. Results In all, 10 patients completed the treatment (10 TMS sessions). Auditory hallucination scores of both groups improved; however, there was no statistical difference in any of the scales between the active and the sham-treated groups. Conclusions Low-frequency deep TMS to the left temporoparietal cortex using the protocol mentioned above has no statistically significant effect on auditory hallucinations or the other clinical scales measured in schizophrenic patients. Trial Registration Clinicaltrials.gov identifier: NCT00564096. PMID:22559192

  10. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  11. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia

    PubMed Central

    Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  12. Monosialotetrahexosylganglioside Inhibits the Expression of p-CREB and NR2B in the Auditory Cortex in Rats with Salicylate-Induced Tinnitus.

    PubMed

    Song, Rui-Biao; Lou, Wei-Hua

    2015-01-01

    This study investigated the effects of monosialotetrahexosylganglioside (GM1) on the expression of N-methyl-D-aspartate receptor subunit 2B (NR2B) and phosphorylated (p)-cyclic AMP response element-binding protein (CREB) in the auditory cortex of rats with tinnitus. Tinnitus-like behavior in rats was tested with the gap prepulse inhibition of acoustic startle paradigm. We then investigated the NR2B mRNA and protein and p-CREB protein levels in the auditory cortex of tinnitus rats compared with normal rats. Rats treated for 4 days with salicylate exhibited tinnitus. NR2B mRNA and protein and p-CREB protein levels were upregulated in these animals, with expression returning to normal levels 14 days after cessation of treatment; baseline levels of NR2B and p-CREB were also restored by GM1 administration. These data suggest that chronic salicylate administration induces tinnitus via upregulation of p-CREB and NR2B expression, and that GM1 can potentially be used to treat tinnitus.
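
    Illustrative sketch (hypothetical startle amplitudes): the gap prepulse inhibition of acoustic startle measure reduces to a ratio of mean startle amplitudes on gap versus no-gap trials; a ratio well below 1 indicates normal gap-induced inhibition, while a ratio approaching 1 after salicylate is taken as tinnitus-like behavior.

      import numpy as np

      def gpias_ratio(startle_gap, startle_no_gap):
          """Mean startle amplitude with a preceding gap, relative to no-gap trials."""
          return float(np.mean(startle_gap) / np.mean(startle_no_gap))

      # Entirely hypothetical startle amplitudes (arbitrary units)
      baseline = gpias_ratio(startle_gap=[0.41, 0.37, 0.45, 0.40],
                             startle_no_gap=[0.95, 1.05, 0.98, 1.02])
      salicylate = gpias_ratio(startle_gap=[0.90, 0.88, 0.97, 0.93],
                               startle_no_gap=[1.00, 0.96, 1.04, 0.99])
      print(f"baseline GPIAS ratio:   {baseline:.2f}  (gap inhibits startle)")
      print(f"salicylate GPIAS ratio: {salicylate:.2f}  (inhibition lost -> tinnitus-like)")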

  13. Cholecystokinin from the entorhinal cortex enables neural plasticity in the auditory cortex

    PubMed Central

    Li, Xiao; Yu, Kai; Zhang, Zicong; Sun, Wenjian; Yang, Zhou; Feng, Jingyu; Chen, Xi; Liu, Chun-Hua; Wang, Haitao; Guo, Yi Ping; He, Jufang

    2014-01-01

    Patients with damage to the medial temporal lobe show deficits in forming new declarative memories but can still recall older memories, suggesting that the medial temporal lobe is necessary for encoding memories in the neocortex. Here, we found that cortical projection neurons in the perirhinal and entorhinal cortices were mostly immunopositive for cholecystokinin (CCK). Local infusion of CCK in the auditory cortex of anesthetized rats induced plastic changes that enabled cortical neurons to potentiate their responses or to start responding to an auditory stimulus that was paired with a tone that robustly triggered action potentials. CCK infusion also enabled auditory neurons to start responding to a light stimulus that was paired with a noise burst. In vivo intracellular recordings in the auditory cortex showed that synaptic strength was potentiated after two pairings of presynaptic and postsynaptic activity in the presence of CCK. Infusion of a CCKB antagonist in the auditory cortex prevented the formation of a visuo-auditory association in awake rats. Finally, activation of the entorhinal cortex potentiated neuronal responses in the auditory cortex, which was suppressed by infusion of a CCKB antagonist. Together, these findings suggest that the medial temporal lobe influences neocortical plasticity via CCK-positive cortical projection neurons in the entorhinal cortex. PMID:24343575

  14. Perceptual load interacts with stimulus processing across sensory modalities.

    PubMed

    Klemen, J; Büchel, C; Rose, M

    2009-06-01

    According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.

  15. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    PubMed

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  16. Cortical Pitch Regions in Humans Respond Primarily to Resolved Harmonics and Are Located in Specific Tonotopic Regions of Anterior Auditory Cortex

    PubMed Central

    Kanwisher, Nancy; McDermott, Josh H.

    2013-01-01

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce “resolved” peaks of excitation in the cochlea, whereas others are “unresolved,” providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior. PMID:24336712

  17. Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.

    PubMed

    Berlot, Eva; Formisano, Elia; De Martino, Federico

    2018-05-23

    Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors 0270-6474/18/384934-09$15.00/0.

  18. Representations of Pitch and Timbre Variation in Human Auditory Cortex

    PubMed Central

    2017-01-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
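
    Illustrative sketch (simulated voxel patterns; voxel and run counts, noise level, and the correlation-based classifier are assumptions, not the study's pipeline): two conditions can show no univariate difference in mean activation yet remain discriminable from their fine-grained multivoxel patterns.

      import numpy as np

      rng = np.random.default_rng(3)
      n_vox, n_runs = 200, 10

      # Both conditions share the same mean activation level (no univariate
      # difference) but have distinct, reproducible fine-grained patterns.
      pattern_pitch = rng.standard_normal(n_vox)
      pattern_timbre = rng.standard_normal(n_vox)
      runs_pitch = pattern_pitch + 1.5 * rng.standard_normal((n_runs, n_vox))
      runs_timbre = pattern_timbre + 1.5 * rng.standard_normal((n_runs, n_vox))

      def classify_leave_one_run_out(a_runs, b_runs):
          """Correlation classifier: assign each left-out run to the closer class mean."""
          correct = 0
          for i in range(n_runs):
              train = np.delete(np.arange(n_runs), i)
              mean_a, mean_b = a_runs[train].mean(0), b_runs[train].mean(0)
              for test, label in ((a_runs[i], 0), (b_runs[i], 1)):
                  r_a = np.corrcoef(test, mean_a)[0, 1]
                  r_b = np.corrcoef(test, mean_b)[0, 1]
                  correct += int((r_b > r_a) == bool(label))
          return correct / (2 * n_runs)

      print("univariate difference:",
            round(float(runs_pitch.mean() - runs_timbre.mean()), 3))
      print("pattern classification accuracy:", classify_leave_one_run_out(runs_pitch, runs_timbre))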

  19. Segregating the neural correlates of physical and perceived change in auditory input using the change deafness effect.

    PubMed

    Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M

    2013-05-01

    Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.

  20. Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.

    PubMed

    Meredith, M Alex; Allman, Brian L

    2015-03-01

    The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Tonic effects of the dopaminergic ventral midbrain on the auditory cortex of awake macaque monkeys.

    PubMed

    Huang, Ying; Mylius, Judith; Scheich, Henning; Brosch, Michael

    2016-03-01

    This study shows that ongoing electrical stimulation of the dopaminergic ventral midbrain can modify neuronal activity in the auditory cortex of awake primates for several seconds. This was reflected in a decrease of the spontaneous firing and in a bidirectional modification of the power of auditory evoked potentials. We consider that both effects are due to an increase in the dopamine tone in auditory cortex induced by the electrical stimulation. Thus, the dopaminergic ventral midbrain may contribute to the tonic activity in auditory cortex that has been proposed to be involved in associating events of auditory tasks (Brosch et al. Hear Res 271:66-73, 2011) and may modulate the signal-to-noise ratio of the responses to auditory stimuli.

  2. Self-Regulation of the Primary Auditory Cortex Attention Via Directed Attention Mediated By Real Time fMRI Neurofeedback

    DTIC Science & Technology

    2017-05-05

    Conference abstract (Sherwood, presented at the 2017 Radiological Society of North America Conference) on self-regulation of the primary auditory cortex via directed attention mediated by real-time fMRI neurofeedback. The approach targets auditory cortex hyperactivity by training participants to regulate primary auditory cortex (A1) activity using real-time functional magnetic resonance imaging neurofeedback.

  3. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  4. Sound-level-dependent representation of frequency modulations in human auditory cortex: a low-noise fMRI study.

    PubMed

    Brechmann, André; Baumgart, Frank; Scheich, Henning

    2002-01-01

    Recognition of sound patterns must be largely independent of level and of masking or jamming background sounds. Auditory patterns of relevance in numerous environmental sounds, species-specific vocalizations and speech are frequency modulations (FM). Level-dependent activation of the human auditory cortex (AC) in response to a large set of upward and downward FM tones was studied with low-noise (48 dB) functional magnetic resonance imaging at 3 Tesla. Separate analysis in four territories of AC was performed in each individual brain using a combination of anatomical landmarks and spatial activation criteria for their distinction. Activation of territory T1b (including primary AC) showed the most robust level dependence over the large range of 48-102 dB in terms of activated volume and blood oxygen level dependent contrast (BOLD) signal intensity. The left nonprimary territory T2 also showed a good correlation of level with activated volume but, in contrast to T1b, not with BOLD signal intensity. These findings are compatible with level coding mechanisms observed in animal AC. A systematic increase of activation with level was not observed for T1a (anterior of Heschl's gyrus) and T3 (on the planum temporale). Thus these areas might not be specifically involved in processing of the overall intensity of FM. The rostral territory T1a of the left hemisphere exhibited highest activation when the FM sound level fell 12 dB below scanner noise. This supports the previously suggested special involvement of this territory in foreground-background decomposition tasks. Overall, AC of the left hemisphere showed a stronger level-dependence of signal intensity and activated volume than the right hemisphere. But any side differences of signal intensity at given levels were lateralized to right AC. This might point to an involvement of the right hemisphere in more specific aspects of FM processing than level coding.

  5. Different forms of effective connectivity in primate frontotemporal pathways.

    PubMed

    Petkov, Christopher I; Kikuchi, Yukiko; Milne, Alice E; Mishkin, Mortimer; Rauschecker, Josef P; Logothetis, Nikos K

    2015-01-23

    It is generally held that non-primary sensory regions of the brain have a strong impact on frontal cortex. However, the effective connectivity of pathways to frontal cortex is poorly understood. Here we microstimulate sites in the superior temporal and ventral frontal cortex of monkeys and use functional magnetic resonance imaging to evaluate the functional activity resulting from the stimulation of interconnected regions. Surprisingly, we find that, although certain earlier stages of auditory cortical processing can strongly activate frontal cortex, downstream auditory regions, such as voice-sensitive cortex, appear to functionally engage primarily an ipsilateral temporal lobe network. Stimulating other sites within this activated temporal lobe network shows strong activation of frontal cortex. The results indicate that the relative stage of sensory processing does not predict the level of functional access to the frontal lobes. Rather, certain brain regions engage local networks, only parts of which have a strong functional impact on frontal cortex.

  7. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  8. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  9. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features from the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  10. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    PubMed

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  11. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  12. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  13. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC); it appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and it is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  14. The Corticofugal Effects of Auditory Cortex Microstimulation on Auditory Nerve and Superior Olivary Complex Responses Are Mediated via Alpha-9 Nicotinic Receptor Subunit

    PubMed Central

    Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.

    2016-01-01

    Background and Objective The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498

  15. The onset of visual experience gates auditory cortex critical periods

    PubMed Central

    Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.

    2016-01-01

    Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281

  16. Degraded Auditory Processing in a Rat Model of Autism Limits the Speech Representation in Non-primary Auditory Cortex

    PubMed Central

    Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.

    2014-01-01

    Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments, which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033

  17. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex.

    PubMed

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2017-05-01

    A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Information fusion via isocortex-based Area 37 modeling

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta concepts from the low order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed out of five cortical stacks in which each stack consists of three vertically oriented six layer isocortex models. The input to output training of each cortical model uses the OCOS (on center - off surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1) which is inherently a recurrent network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models as it is difficult to divorce the sensory processing subsystems from the higher level processing in the associative cortex. The overall software architecture presented is biologically based and is presented as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system which utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.
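
    The module organization described in this record (four cortical modules, each built from five cortical stacks, each stack from three six-layer isocortex models) can be sketched as a simple containment hierarchy. The class names and counts below merely mirror the numbers stated in the abstract; they are not the author's software.

```python
# Hypothetical structural sketch of the architecture described above: four modules
# (auditory, visual, polymodal fusion, limbic), each with five cortical stacks,
# each stack holding three six-layer isocortex models. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IsocortexModel:
    layers: int = 6                                        # six-layer isocortex model

@dataclass
class CorticalStack:
    models: List[IsocortexModel] = field(
        default_factory=lambda: [IsocortexModel() for _ in range(3)])

@dataclass
class Module:
    name: str
    stacks: List[CorticalStack] = field(
        default_factory=lambda: [CorticalStack() for _ in range(5)])

system = [Module(n) for n in ("auditory", "visual", "polymodal_fusion", "limbic")]
total = sum(len(stack.models) for module in system for stack in module.stacks)
print(f"{len(system)} modules, {total} isocortex models in total")   # 4 modules, 60 models
```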

  19. Out-of-synchrony speech entrainment in developmental dyslexia.

    PubMed

    Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel

    2016-08-01

    Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.
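
    Speech-brain entrainment of the kind measured here is commonly quantified as coherence between the speech amplitude envelope and the cortical signal within the delta band. The sketch below shows that computation on simulated signals; the sampling rate, window length, and band limits are assumptions, not the authors' MEG pipeline.

```python
# Hypothetical sketch: delta-band (0.5-1 Hz) coherence between a speech envelope and a
# cortical (e.g. MEG) signal. Both signals are simulated stand-ins for real recordings.
import numpy as np
from scipy.signal import coherence

fs = 200.0                                 # sampling rate in Hz (placeholder)
t = np.arange(0, 60, 1 / fs)               # one minute of signal
rng = np.random.default_rng(2)
envelope = np.sin(2 * np.pi * 0.8 * t) + 0.5 * rng.normal(size=t.size)   # speech envelope
meg = 0.6 * np.sin(2 * np.pi * 0.8 * t + 0.5) + rng.normal(size=t.size)  # cortical signal

f, coh = coherence(envelope, meg, fs=fs, nperseg=int(8 * fs))
delta = (f >= 0.5) & (f <= 1.0)
print(f"mean delta-band coherence: {coh[delta].mean():.2f}")
```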

  20. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex.

    PubMed

    Kajikawa, Yoshinao; Smiley, John F; Schroeder, Charles E

    2017-10-18

    Prior studies have reported "local" field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be "contaminated" by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such "far-field" activity, in addition to, or in absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the default assumption is that FPs originate from local activity, and thus are termed "local" (LFP). We examine this general problem in the context of previously reported face-evoked FPs in macaque auditory cortex. Our findings suggest that face-FPs are indeed generated in the underlying inferotemporal cortex and volume-conducted to the auditory cortex. The note of caution raised by these findings is of particular importance for studies that seek to assign FP/LFP recordings to specific cortical layers. Copyright © 2017 the authors 0270-6474/17/3710139-15$15.00/0.
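
    Current source density analysis of a laminar field-potential profile is, in its simplest form, the negative second spatial derivative of the potential across equally spaced contacts. The sketch below applies that estimate to simulated data; the channel count, contact spacing, and omission of conductivity scaling are assumptions, so this is a schematic of the approach rather than the authors' implementation.

```python
# Hypothetical sketch of one-dimensional current source density (CSD) analysis:
# CSD is estimated as the negative second spatial derivative of the field potential
# across equally spaced laminar contacts. Simulated data stand in for real recordings.
import numpy as np

n_channels, n_samples = 24, 1000
spacing_mm = 0.1                                  # inter-contact spacing (placeholder)
rng = np.random.default_rng(3)
lfp = rng.normal(size=(n_channels, n_samples))    # channels x time field potentials

# Second spatial difference along the depth axis, sign-inverted so that current sinks
# (inward membrane current) take negative values by the usual convention.
csd = -np.diff(lfp, n=2, axis=0) / spacing_mm**2
print(csd.shape)                                  # (n_channels - 2, n_samples)
```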

  2. High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat

    PubMed Central

    Butler, Blake E.; Hall, Amee J.; Lomber, Stephen G.

    2015-01-01

    The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential “pitch centres” in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed to occur in either primary auditory cortex (A1) or the anterior auditory field (AAF) which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception. PMID:26225563
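
    Iterated rippled noise of the kind used here is typically built by a delay-and-add loop: a noise token is delayed, scaled, and added back to itself repeatedly, which introduces a pitch near 1/delay without adding sharp spectral edges. The generator below is a minimal sketch with placeholder parameters, not the stimulus code used in the study.

```python
# Hypothetical sketch of an iterated rippled noise (IRN) generator: repeated
# delay-and-add stages applied to a noise token produce a pitch near 1/delay Hz.
# Duration, delay, gain, and iteration count are illustrative placeholders.
import numpy as np

def iterated_rippled_noise(duration=1.0, fs=44100, delay=0.005, gain=1.0, iterations=16, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=int(duration * fs))
    d = int(round(delay * fs))                    # delay in samples (~200 Hz pitch here)
    for _ in range(iterations):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + gain * delayed                    # delay-and-add stage
    return x / np.max(np.abs(x))                  # normalize to avoid clipping

stimulus = iterated_rippled_noise()
print(stimulus.shape)
```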

  3. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  4. The auditory cross-section (AXS) test battery: A new way to study afferent/efferent relations linking body periphery (ear, voice, heart) with brainstem and cortex

    NASA Astrophysics Data System (ADS)

    Lauter, Judith

    2002-05-01

    Several noninvasive methods are available for studying the neural bases of human sensory-motor function, but their cost is prohibitive for many researchers and clinicians. The auditory cross-section (AXS) test battery utilizes relatively inexpensive methods, yet yields data that are at least equivalent, if not superior in some applications, to those generated by more expensive technologies. The acronym emphasizes access to axes: the battery makes it possible to assess dynamic physiological relations along all three body-brain axes: rostro-caudal (afferent/efferent), dorso-ventral, and right-left, on an individually-specific basis, extending from cortex to the periphery. For auditory studies, a three-level physiological ear-to-cortex profile is generated, utilizing (1) quantitative electroencephalography (qEEG); (2) the repeated evoked potentials version of the auditory brainstem response (REPs/ABR); and (3) otoacoustic emissions (OAEs). Battery procedures will be explained, and sample data presented illustrating correlated multilevel changes in ear, voice, heart, brainstem, and cortex in response to circadian rhythms, and challenges with substances such as antihistamines and Ritalin. Potential applications for the battery include studies of central auditory processing, reading problems, hyperactivity, neural bases of voice and speech motor control, neurocardiology, individually-specific responses to medications, and the physiological bases of tinnitus, hyperacusis, and related treatments.

  5. Topography of sound level representation in the FM sweep selective region of the pallid bat auditory cortex.

    PubMed

    Measor, Kevin; Yarrow, Stuart; Razak, Khaleel A

    2018-05-26

    Sound level processing is a fundamental function of the auditory system. To determine how the cortex represents sound level, it is important to quantify how changes in level alter the spatiotemporal structure of cortical ensemble activity. This is particularly true for echolocating bats that have control over, and often rapidly adjust, call level to actively change echo level. To understand how cortical activity may change with sound level, here we mapped response rate and latency changes with sound level in the auditory cortex of the pallid bat. The pallid bat uses a 60-30 kHz downward frequency modulated (FM) sweep for echolocation. Neurons tuned to frequencies between 30 and 70 kHz in the auditory cortex are selective for the properties of FM sweeps used in echolocation forming the FM sweep selective region (FMSR). The FMSR is strongly selective for sound level between 30 and 50 dB SPL. Here we mapped the topography of level selectivity in the FMSR using downward FM sweeps and show that neurons with more monotonic rate level functions are located in caudomedial regions of the FMSR overlapping with high frequency (50-60 kHz) neurons. Non-monotonic neurons dominate the FMSR, and are distributed across the entire region, but there is no evidence for amplitopy. We also examined how first spike latency of FMSR neurons change with sound level. The majority of FMSR neurons exhibit paradoxical latency shift wherein the latency increases with sound level. Moreover, neurons with paradoxical latency shifts are more strongly level selective and are tuned to lower sound level than neurons in which latencies decrease with level. These data indicate a clustered arrangement of neurons according to monotonicity, with no strong evidence for finer scale topography, in the FMSR. The latency analysis suggests mechanisms for strong level selectivity that is based on relative timing of excitatory and inhibitory inputs. Taken together, these data suggest how the spatiotemporal spread of cortical activity may represent sound level. Copyright © 2018. Published by Elsevier B.V.
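
    Rate-level monotonicity of the kind mapped here is often summarized with a simple index, for example the response at the highest sound level divided by the peak response across levels. The sketch below computes such an index on made-up rate-level functions; the 0.75 classification criterion is an assumption for illustration, not the authors' exact definition.

```python
# Hypothetical sketch: classify rate-level functions as monotonic or non-monotonic with
# a simple monotonicity index (response at the highest level / maximum response).
# Firing rates are invented examples; the 0.75 criterion is an assumed cutoff.
import numpy as np

levels_db = np.array([10, 20, 30, 40, 50, 60, 70, 80])      # sound levels (dB SPL)
rate_monotonic = np.array([2, 5, 9, 14, 18, 21, 23, 24])    # spikes/s, keeps growing
rate_nonmonotonic = np.array([2, 8, 18, 24, 20, 12, 6, 3])  # peaks at a mid level

def monotonicity_index(rates):
    return rates[-1] / rates.max()

for name, rates in [("example A", rate_monotonic), ("example B", rate_nonmonotonic)]:
    mi = monotonicity_index(rates)
    label = "monotonic" if mi >= 0.75 else "non-monotonic"
    print(f"{name}: MI = {mi:.2f} -> classified {label}")
```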

  6. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation.

    PubMed

    Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen

    2017-01-01

    Early hearing deprivation could affect the development of auditory, language, and vision ability. Insufficient or no stimulation of the auditory cortex during the sensitive periods of plasticity could affect the development of hearing, language, and vision function. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low-frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision-related brain areas were compared between deaf infants and normal subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex. Increased ALFF and ReHo were observed in vision-related cortex, which suggests that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deaf duration in infants with CSSHL. ALFF of right BA39 was positively correlated with deaf duration in infants with CSSHL. In conclusion, ALFF and ReHo can reflect the abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function has been affected by congenital severe sensorineural hearing loss before 4 years of age.
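
    ALFF, as used here, is typically computed by transforming a voxel's resting-state time series to the frequency domain and averaging the amplitude spectrum within a low-frequency band (commonly 0.01-0.08 Hz). The sketch below shows that computation on a single simulated time series; the repetition time, scan length, and band limits are conventional assumptions, not the study's preprocessing pipeline.

```python
# Hypothetical sketch of the amplitude of low-frequency fluctuations (ALFF): the mean
# FFT amplitude of a (detrended) voxel time series within 0.01-0.08 Hz.
# The time series is simulated; band limits follow common convention, not the paper's code.
import numpy as np

tr = 2.0                                   # repetition time in seconds (placeholder)
n_vols = 240                               # 8 minutes of resting-state data
rng = np.random.default_rng(4)
ts = rng.normal(size=n_vols)               # one voxel's time series (simulated)

freqs = np.fft.rfftfreq(n_vols, d=tr)
amplitude = np.abs(np.fft.rfft(ts)) / n_vols
band = (freqs >= 0.01) & (freqs <= 0.08)
alff = amplitude[band].mean()
print(f"ALFF (arbitrary units): {alff:.3f}")
```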

  7. Early auditory processing in area V5/MT+ of the congenitally blind brain.

    PubMed

    Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly

    2013-11-13

    Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.

  8. Encoding of Natural Sounds at Multiple Spectral and Temporal Resolutions in the Human Auditory Cortex

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, Rainer; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2014-01-01

    Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex. PMID:24391486
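
    Encoding models of the kind compared here predict each voxel's response from a set of stimulus features (for example, spectrotemporal modulation energies) and are judged by how well they predict responses to held-out sounds. The sketch below shows that generic fit-and-predict loop with ridge regression on simulated data; it illustrates the model-comparison logic rather than the specific representations tested in the study.

```python
# Hypothetical sketch of an fMRI encoding-model analysis: fit a ridge regression from
# stimulus features to voxel responses on training sounds, then score per-voxel
# prediction accuracy on held-out sounds. All data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_sounds, n_features, n_voxels = 160, 60, 300
features = rng.normal(size=(n_sounds, n_features))    # sound x feature matrix
weights = rng.normal(size=(n_features, n_voxels))
responses = features @ weights + rng.normal(scale=2.0, size=(n_sounds, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(features, responses, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Per-voxel prediction accuracy: correlation between predicted and observed responses.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median prediction accuracy r = {np.median(r):.2f}")
```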

  9. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
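
    The hemifield code described above can be illustrated with a toy opponent-channel readout: two broadly tuned populations, one preferring each side of space, jointly encode azimuth through their relative activity. The sigmoidal tuning functions and the simple difference readout below are didactic simplifications, not a model taken from the review.

```python
# Hypothetical toy illustration of a hemifield (opponent-channel) code: two broadly
# tuned populations prefer left and right space, and azimuth is read out from their
# activity difference. Tuning shapes and the readout are deliberate simplifications.
import numpy as np

def channel_response(azimuth_deg, preferred_side):
    # Broad sigmoidal tuning centered on the midline, preferring one hemifield.
    sign = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + np.exp(-sign * azimuth_deg / 20.0))

azimuths = np.linspace(-90, 90, 7)
right = channel_response(azimuths, "right")
left = channel_response(azimuths, "left")

# The right-minus-left difference grows monotonically with azimuth, so the relative
# activity of the two channels carries the location information.
for az, diff in zip(azimuths, right - left):
    print(f"azimuth {az:6.1f} deg -> channel difference {diff:+.2f}")
```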

  10. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats

    PubMed Central

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies

    2016-01-01

    Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE has a key role in A1 and in the attention of stressed rats during tone discrimination. PMID:28082872

  11. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks

    PubMed Central

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components, such as the perceived loudness, the lateralization, and the tinnitus type (pure tone, noise-like), and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrates the involvement of auditory brain areas (primary and secondary auditory cortex) as well as several non-auditory brain areas, such as the anterior cingulate cortex (dorsal and subgenual), dorsolateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept, increases the explanatory power of the non-auditory brain areas' involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel, dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  12. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed on a perceptual item: simple detection, discrimination, and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.

  13. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    PubMed

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude-modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  14. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate

    PubMed Central

    Johnson, Luke A.; Della Santina, Charles C.

    2016-01-01

    Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal-language perception. How CI devices engage the brain at the single-neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point to a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks. PMID:27927962

  15. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    PubMed

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second-level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and with their performance on surprise story recognition memory tests administered after scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with activation in the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation between the effects of auditory sentence comprehension and story recognition memory is that the left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex, where the response was dissociated from that in the left posterior temporal lobe.

  16. Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model

    PubMed Central

    Nakao, Kazuhito; Nakazawa, Kazu

    2014-01-01

    In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and its phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
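
    The evoked-power and phase-locking measures described above are standard ASSR metrics. Purely as an illustration (this is not the authors' analysis code), the sketch below computes 40-Hz evoked power and inter-trial phase coherence from trial-wise LFP epochs with NumPy/SciPy; the array shapes, sampling rate, and synthetic data are assumptions.

        import numpy as np
        from scipy.signal import get_window

        def assr_metrics(epochs, fs=1000.0, target_freq=40.0):
            """Evoked power and inter-trial phase coherence (ITPC) at a target frequency.

            epochs : array (n_trials, n_samples), LFP segments aligned to click-train onset.
            Returns (evoked_power, itpc) at the FFT bin nearest target_freq.
            """
            n_trials, n_samples = epochs.shape
            win = get_window("hann", n_samples)
            spectra = np.fft.rfft(epochs * win, axis=1)      # per-trial complex spectra
            freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
            k = np.argmin(np.abs(freqs - target_freq))       # bin closest to 40 Hz

            evoked = np.fft.rfft(epochs.mean(axis=0) * win)  # spectrum of the trial average
            evoked_power = np.abs(evoked[k]) ** 2            # phase-locked ("evoked") power

            phases = spectra[:, k] / np.abs(spectra[:, k])   # unit phasors per trial
            itpc = np.abs(phases.mean())                     # 1 = perfect phase locking
            return evoked_power, itpc

        # Synthetic example: a weak 40 Hz response buried in noise across 50 trials.
        rng = np.random.default_rng(0)
        t = np.arange(0, 1.0, 1.0 / 1000.0)
        epochs = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, (50, t.size))
        print(assr_metrics(epochs))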

  17. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
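
    The systems-theoretic stimulus reconstruction mentioned above is commonly implemented as a linear backward model that maps time-lagged neural channels onto the stimulus (speech) envelope, with reconstruction fidelity reported as the correlation between the reconstructed and actual envelopes. The sketch below is a minimal assumed version of such a ridge-regularized decoder, not the authors' code; the channel count, lag window, regularization constant, and toy data are placeholders.

        import numpy as np

        def lagged(X, n_lags):
            """Stack time-lagged copies of the channels: (T, C) -> (T, C * n_lags)."""
            T, C = X.shape
            out = np.zeros((T, C * n_lags))
            for lag in range(n_lags):
                out[lag:, lag * C:(lag + 1) * C] = X[:T - lag]
            return out

        def fit_reconstruction(neural, envelope, n_lags=20, ridge=1e2):
            """Backward model: reconstruct the speech envelope from neural channels."""
            X = lagged(neural, n_lags)
            XtX = X.T @ X + ridge * np.eye(X.shape[1])
            return np.linalg.solve(XtX, X.T @ envelope)

        def reconstruct(neural, w, n_lags=20):
            return lagged(neural, n_lags) @ w

        # Toy example: 60 "MEG" channels that each carry a noisy copy of the envelope.
        rng = np.random.default_rng(1)
        T, C = 5000, 60
        envelope = np.convolve(rng.random(T), np.ones(50) / 50, mode="same")  # slow envelope
        neural = envelope[:, None] * rng.random(C) + rng.normal(0, 0.5, (T, C))
        w = fit_reconstruction(neural, envelope)
        r = np.corrcoef(reconstruct(neural, w), envelope)[0, 1]
        print(f"reconstruction accuracy r = {r:.2f}")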

  19. Differential responses of primary auditory cortex in autistic spectrum disorder with auditory hypersensitivity.

    PubMed

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako

    2012-01-25

    The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. The group with hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

  20. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
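
    Decoding stimulus location from voxel-wise activity patterns of this kind is typically done with a cross-validated linear classifier. The following is a hypothetical scikit-learn sketch, not the authors' pipeline; the voxel, trial, and location counts and the injected signal are placeholders used only to make the example runnable.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Placeholder data: trial-wise activity patterns over auditory-cortex voxels.
        rng = np.random.default_rng(2)
        n_trials_per_loc, n_voxels, n_locations = 40, 300, 4
        X = rng.normal(0, 1, (n_trials_per_loc * n_locations, n_voxels))
        y = np.repeat(np.arange(n_locations), n_trials_per_loc)
        # Inject a weak location-specific signal into a different subset of voxels per location.
        for loc in range(n_locations):
            X[y == loc, loc * 20:(loc + 1) * 20] += 0.5

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
        print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_locations:.2f})")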

  1. How silent is silent reading? Intracerebral evidence for top-down activation of temporal voice areas during reading.

    PubMed

    Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe

    2012-12-05

    As you might experience it while reading this sentence, silent reading often involves a speech imagery component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we recorded from four epileptic human patients implanted with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy for population-level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.

  2. "When Music Speaks": Auditory Cortex Morphology as a Neuroanatomical Marker of Language Aptitude and Musicality.

    PubMed

    Turker, Sabrina; Reiterer, Susanne M; Seither-Preisler, Annemarie; Schneider, Peter

    2017-01-01

    Recent research has shown that the morphology of certain brain regions may indeed correlate with a number of cognitive skills such as musicality or language ability. The main aim of the present study was to explore the extent to which foreign language aptitude, in particular phonetic coding ability, is influenced by the morphology of Heschl's gyrus (HG; auditory cortex), working memory capacity, and musical ability. In this study, the auditory cortices of German-speaking individuals (N = 30; 13 males/17 females; aged 20-40 years) with high and low scores in a number of language aptitude tests were compared. The subjects' language aptitude was measured by three different tests, namely a Hindi speech imitation task (phonetic coding ability), an English pronunciation assessment, and the Modern Language Aptitude Test (MLAT). Furthermore, working memory capacity and musical ability were assessed to reveal their relationship with foreign language aptitude. On the behavioral level, significant correlations were found between phonetic coding ability, English pronunciation skills, musical experience, and language aptitude as measured by the MLAT. Parts of all three tests measuring language aptitude correlated positively and significantly with each other, supporting their validity for measuring components of language aptitude. Remarkably, the number of instruments played by subjects showed significant correlations with all language aptitude measures and musicality, whereas the number of foreign languages did not show any correlations. With regard to the neuroanatomy of auditory cortex, adults with very high scores in the Hindi testing and the musicality test (AMMA) demonstrated a clear predominance of complete posterior HG duplications in the right hemisphere. This may reignite the discussion of the importance of the right hemisphere for language processing, especially when linked or common resources are involved, such as the inter-dependency between phonetic and musical aptitude.

  3. Differential Expression of Phosphorylated Mitogen-Activated Protein Kinase (pMAPK) in the Lateral Amygdala of Mice Selectively Bred for High and Low Fear

    DTIC Science & Technology

    2013-07-02

    Indexed excerpt (figure caption): Figure 1.3. Schematic model of the neural circuitry of Pavlovian auditory fear conditioning. Model shows how an auditory conditioned stimulus and a nociceptive unconditioned foot-shock stimulus converge in the lateral amygdala (LA) via auditory thalamus and cortex and somatosensory …

  4. Level-tolerant duration selectivity in the auditory cortex of the velvety free-tailed bat Molossus molossus.

    PubMed

    Macías, Silvio; Hernández-Abad, Annette; Hechavarría, Julio C; Kössl, Manfred; Mora, Emanuel C

    2015-05-01

    It has been reported previously that in the inferior colliculus of the bat Molossus molossus, neuronal duration tuning is ambiguous because the tuning type of the neurons dramatically changes with the sound level. In the present study, duration tuning was examined in the auditory cortex of M. molossus to describe whether it is as ambiguous as the collicular tuning. From a population of 174 cortical neurons, 104 (60%) did not show duration selectivity (all-pass). Around 5% (9 units) responded preferentially to stimuli having longer durations, showing long-pass duration response functions; 35 (20%) responded to a narrow range of stimulus durations, showing band-pass duration response functions; 24 (14%) responded most strongly to short stimulus durations, showing short-pass duration response functions; and two neurons (1%) responded best to two different stimulus durations, showing a two-peaked duration-response function. The majority of neurons showing short-pass (16 out of 24) and band-pass (24 out of 35) selectivity displayed "O-shaped" duration response areas. In contrast to the inferior colliculus, duration tuning in the auditory cortex of M. molossus appears level tolerant. That is, the type of duration selectivity and the stimulus duration eliciting the maximum response were unaffected by changing sound level.
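
    The all-pass, short-pass, band-pass and long-pass labels used above are typically assigned with a threshold criterion, for example marking the durations at which the response exceeds a fixed fraction of the maximum. The sketch below illustrates one such assumed criterion (50% of maximum); it is a simplified illustration, not necessarily the authors' exact procedure.

        import numpy as np

        def classify_duration_tuning(durations_ms, spike_counts, criterion=0.5):
            """Label a duration-response function by where it exceeds `criterion` * max.

            Simple illustrative scheme: above threshold at all durations -> all-pass;
            only at the short end -> short-pass; only at the long end -> long-pass;
            only in a middle band -> band-pass.
            """
            r = np.asarray(spike_counts, dtype=float)
            above = r >= criterion * r.max()
            if above.all():
                return "all-pass"
            if above[0] and not above[-1]:
                return "short-pass"
            if above[-1] and not above[0]:
                return "long-pass"
            return "band-pass"

        durations = [1, 2, 4, 8, 16, 32, 64]   # stimulus durations in ms
        print(classify_duration_tuning(durations, [2, 9, 10, 8, 3, 1, 1]))   # band-pass
        print(classify_duration_tuning(durations, [10, 9, 4, 2, 1, 1, 1]))   # short-pass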

  5. Direct Recordings of Pitch Responses from Human Auditory Cortex

    PubMed Central

    Griffiths, Timothy D.; Kumar, Sukhbinder; Sedley, William; Nourski, Kirill V.; Kawasaki, Hiroto; Oya, Hiroyuki; Patterson, Roy D.; Brugge, John F.; Howard, Matthew A.

    2010-01-01

    Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept. PMID:20605456

  6. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    PubMed

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  7. Population responses in primary auditory cortex simultaneously represent the temporal envelope and periodicity features in natural speech.

    PubMed

    Abrams, Daniel A; Nicol, Trent; White-Schwoch, Travis; Zecker, Steven; Kraus, Nina

    2017-05-01

    Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
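
    The two stimulus features tracked above, the low-frequency speech envelope and the periodicity (fundamental-frequency) cue, can be separated from a sound waveform with standard signal-processing steps. The sketch below is a minimal, assumed illustration of that separation on a toy harmonic signal; it is not the authors' analysis, and the cutoff, F0, and bandwidth values are placeholders.

        import numpy as np
        from scipy.signal import hilbert, butter, sosfiltfilt

        def speech_envelope(x, fs, cutoff_hz=16.0):
            """Low-frequency amplitude envelope: Hilbert magnitude, then low-pass filtered."""
            env = np.abs(hilbert(x))
            sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
            return sosfiltfilt(sos, env)

        def periodicity_band(x, fs, f0_hz=100.0, half_bw_hz=20.0):
            """Narrow band around the fundamental frequency (the periodicity cue)."""
            sos = butter(4, [f0_hz - half_bw_hz, f0_hz + half_bw_hz], btype="band", fs=fs, output="sos")
            return sosfiltfilt(sos, x)

        # Toy "speech-like" signal: a 100 Hz harmonic complex with a slow 4 Hz envelope.
        fs = 8000
        t = np.arange(0, 2.0, 1 / fs)
        carrier = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(1, 6))
        x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier
        env = speech_envelope(x, fs)       # tracks the 4 Hz modulation
        f0_band = periodicity_band(x, fs)  # isolates energy near the 100 Hz fundamental
        print(env.shape, f0_band.shape)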

  8. Decoding sound level in the marmoset primary auditory cortex.

    PubMed

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
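
    The comparison described above, decoding sound level from populations containing monotonic versus nonmonotonic (level-tuned) neurons, can be illustrated with a toy simulation. The sketch below uses synthetic rate-level functions and a nearest-template readout rather than the real A1 responses the authors drew from, so the parameters and accuracy values are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        levels = np.arange(0, 81, 10)                      # sound levels in dB SPL

        def monotonic(level, threshold, slope=0.5, rmax=40):
            """Sigmoidal rate-level function (rate grows with level)."""
            return rmax / (1 + np.exp(-slope * (level - threshold)))

        def nonmonotonic(level, best_level, width=12.0, rmax=40):
            """Level-tuned (nonmonotonic) rate-level function peaking at best_level."""
            return rmax * np.exp(-0.5 * ((level - best_level) / width) ** 2)

        def population_rates(levels, n_mono, n_nonmono):
            """Mean firing rates (n_neurons, n_levels) for a mixed population."""
            rates = [monotonic(levels, threshold=rng.uniform(10, 60)) for _ in range(n_mono)]
            rates += [nonmonotonic(levels, best_level=rng.uniform(10, 70)) for _ in range(n_nonmono)]
            return np.array(rates)

        def decode_accuracy(mean_rates, n_trials=200, noise_sd=4.0):
            """Nearest-template decoding of level from noisy population responses."""
            correct = 0
            for _ in range(n_trials):
                true_idx = rng.integers(len(levels))
                resp = mean_rates[:, true_idx] + rng.normal(0, noise_sd, mean_rates.shape[0])
                est = np.argmin(((mean_rates - resp[:, None]) ** 2).sum(axis=0))
                correct += int(est == true_idx)
            return correct / n_trials

        mixed = population_rates(levels, n_mono=30, n_nonmono=30)
        pure_mono = population_rates(levels, n_mono=60, n_nonmono=0)
        print("mixed population accuracy:", decode_accuracy(mixed))
        print("monotonic-only accuracy:  ", decode_accuracy(pure_mono))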

  9. Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.

    PubMed

    Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly

    2012-05-01

    Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory, to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation in 'visual' cortex by light or retinal activity in the former condition, and suggest the development of subcortical auditory input to the geniculo-striate pathway.

  10. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  11. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.

  12. Development of neural responsivity to vocal sounds in higher level auditory cortex of songbirds

    PubMed Central

    Miller-Sims, Vanessa C.

    2014-01-01

    Like humans, songbirds learn vocal sounds from “tutors” during a sensitive period of development. Vocal learning in songbirds therefore provides a powerful model system for investigating neural mechanisms by which memories of learned vocal sounds are stored. This study examined whether NCM (caudo-medial nidopallium), a region of higher level auditory cortex in songbirds, serves as a locus where a neural memory of tutor sounds is acquired during early stages of vocal learning. NCM neurons respond well to complex auditory stimuli, and evoked activity in many NCM neurons habituates such that the response to a stimulus that is heard repeatedly decreases to approximately one-half its original level (stimulus-specific adaptation). The rate of neural habituation serves as an index of familiarity, being low for familiar sounds, but high for novel sounds. We found that response strength across different song stimuli was higher in NCM neurons of adult zebra finches than in juveniles, and that only adult NCM responded selectively to tutor song. The rate of habituation across both tutor song and novel conspecific songs was lower in adult than in juvenile NCM, indicating higher familiarity and a more persistent response to song stimuli in adults. In juvenile birds that have memorized tutor vocal sounds, neural habituation was higher for tutor song than for a familiar conspecific song. This unexpected result suggests that the response to tutor song in NCM at this age may be subject to top-down influences that maintain the tutor song as a salient stimulus, despite its high level of familiarity. PMID:24694936
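
    The habituation rate used above as a familiarity index is often quantified as the slope of response strength across successive presentations of the same stimulus; the normalization and fitting details in the sketch below are assumptions for illustration, not the authors' exact method.

        import numpy as np

        def habituation_rate(responses):
            """Slope of response strength across repetitions, normalized to the initial level.

            responses : per-repetition response strengths (e.g., spikes per stimulus).
            A more negative value = faster habituation = a less familiar stimulus.
            """
            r = np.asarray(responses, dtype=float)
            r = r / r[:5].mean()                       # normalize to the early-response level
            reps = np.arange(len(r))
            slope, _ = np.polyfit(reps, r, 1)          # linear fit: change per repetition
            return slope

        novel_song = 10 * np.exp(-0.05 * np.arange(25))    # fast decay -> strongly negative slope
        familiar_song = 10 * np.exp(-0.01 * np.arange(25)) # slow decay -> shallow slope
        print(habituation_rate(novel_song), habituation_rate(familiar_song))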

  13. Precision rodent whisker stimulator with integrated servo-locked control and displacement measurement.

    PubMed

    Walker, Jennifer L; Monjaraz-Fuentes, Fernanda; Pedrow, Christi R; Rector, David M

    2011-03-15

    We developed a high-speed, voice coil-based whisker stimulator that delivers precise deflections of a single whisker or group of whiskers in a repeatable manner. The device is miniature, quiet, and inexpensive to build. Multiple stimulators fit together for independent stimulation of four or more whiskers. The system can be used with animals under anesthesia as well as awake animals with head restraint, and does not require trimming the whiskers. The system can deliver 1-2 mm deflections in 2 ms, resulting in velocities up to 900 mm/s, to attain a wide range of evoked responses. Since auditory artifacts can influence behavioral studies using whisker stimulation, we tested potential effects of auditory noise by recording somatosensory evoked potentials (SEPs) with varying auditory click levels, and with and without 80 dBA background white noise. We found that auditory clicks as low as 40 dBA significantly influenced the SEP. With background white noise, auditory clicks as low as 50 dBA were still detected in components of the SEP. For behavioral studies where animals must learn to respond to whisker stimulation, these sounds must be minimized. Together, the stimulator and data system can be used for psychometric vigilance tasks, mapping of the barrel cortex and other electrophysiological paradigms. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Age-related changes in mitochondrial antioxidant enzyme Trx2 and TXNIP-Trx2-ASK1 signal pathways in the auditory cortex of a mimetic aging rat model: changes to Trx2 in the auditory cortex.

    PubMed

    Sun, Hai-Ying; Hu, Yu-Juan; Zhao, Xue-Yan; Zhong, Yi; Zeng, Ling-Ling; Chen, Xu-Bo; Yuan, Jie; Wu, Jing; Sun, Yu; Kong, Wen; Kong, Wei-Jia

    2015-07-01

    Age-associated degeneration in the central auditory system, which is defined as central presbycusis, can impair sound localization and speech perception. Research has shown that oxidative stress plays a central role in the pathological process of central presbycusis. Thioredoxin 2 (Trx2), one member of the thioredoxin family, plays a key role in regulating the homeostasis of cellular reactive oxygen species and in anti-apoptosis. The purpose of this study was to explore the association between Trx2 and the phenotype of central presbycusis using a mimetic aging animal model induced by long-term exposure to d-galactose (d-Gal). We also explored changes in thioredoxin-interacting protein (TXNIP), apoptosis signal-regulating kinase 1 (ASK1) and phosphorylated ASK1 (p-ASK1) expression, as well as the Trx2-TXNIP/Trx2-ASK1 binding complexes in the auditory cortex of mimetic aging rats. Our results demonstrate that, compared with control groups, the levels of Trx2 and the Trx2-ASK1 binding complex were significantly reduced, whereas TXNIP, ASK1, and p-ASK1 expression and the Trx2-TXNIP binding complex were significantly increased in the auditory cortex of the mimetic aging groups. Our results indicated that changes in Trx2 and the TXNIP-Trx2-ASK1 signal pathway may participate in the pathogenesis of central presbycusis. © 2015 FEBS.

  15. Thalamic connections of the core auditory cortex and rostral supratemporal plane in the macaque monkey.

    PubMed

    Scott, Brian H; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-11-01

    In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs. © 2017 Wiley Periodicals, Inc.

  16. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study

    PubMed Central

    2011-01-01

    Background: Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods: Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from the ambulatory clinics of Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel), as well as from the outpatient populations of other hospitals. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). Results: This preliminary study demonstrated a significant improvement in AHRS scores (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, in SAPS scores (an average reduction of 16.5% ± 20.3%). Conclusions: In this study, we have demonstrated the potential of deep TMS over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger, double-blind, sham-controlled studies are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration: This trial is registered with clinicaltrials.gov (identifier: NCT00564096). PMID:21303566

  17. Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.

    PubMed

    Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S

    2013-05-01

    Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    PubMed

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  19. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    PubMed Central

    Sugihara, Tadashi; Diltz, Mark D.; Averbeck, Bruno B.; Romanski, Lizabeth M.

    2009-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O’Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication. PMID:17065454

  20. Speech comprehension aided by multiple modalities: behavioural and neural interactions

    PubMed Central

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.

    2014-01-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262

  1. Speech comprehension aided by multiple modalities: behavioural and neural interactions.

    PubMed

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K

    2012-04-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy

    PubMed Central

    Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
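
    Eigenvector centrality, the network measure highlighted above, can be computed from a non-negative functional-connectivity matrix as the leading eigenvector of that matrix: each node's score reflects how strongly it connects to other well-connected nodes. The sketch below is a minimal illustration on a toy correlation matrix, not the authors' voxel-wise pipeline; the region count and time series are placeholders.

        import numpy as np

        def eigenvector_centrality(conn):
            """Leading eigenvector of a symmetric, non-negative connectivity matrix.

            By the Perron-Frobenius theorem the leading eigenvector can be chosen
            non-negative; entries score how strongly each node connects to other
            well-connected nodes.
            """
            conn = np.asarray(conn, dtype=float)
            vals, vecs = np.linalg.eigh(conn)          # symmetric eigendecomposition
            v = np.abs(vecs[:, np.argmax(vals)])       # eigenvector of the largest eigenvalue
            return v / v.sum()

        # Toy example: region time series -> correlation matrix -> centrality scores.
        rng = np.random.default_rng(4)
        ts = rng.normal(size=(200, 6))                 # 200 time points, 6 regions
        ts[:, :3] += rng.normal(size=(200, 1))         # regions 0-2 share a common signal
        conn = np.corrcoef(ts.T)
        conn[conn < 0] = 0                             # keep non-negative weights
        print(eigenvector_centrality(conn).round(2))   # hub-like regions score highest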

  3. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    PubMed

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The what, where and how of auditory-object perception.

    PubMed

    Bizley, Jennifer K; Cohen, Yale E

    2013-10-01

    The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.

  5. The what, where and how of auditory-object perception

    PubMed Central

    Bizley, Jennifer K.; Cohen, Yale E.

    2014-01-01

    The fundamental perceptual unit in hearing is the ‘auditory object’. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood. PMID:24052177

  6. Hearing loss in older adults affects neural systems supporting speech comprehension.

    PubMed

    Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-08-31

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
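
    A voxelwise linear relationship between hearing ability and gray matter volume, as examined in the second (VBM) experiment above, can be sketched as a simple mass-univariate regression. The example below is illustrative only; subject counts, voxel counts and variable names are assumptions, and a real VBM analysis would include covariates and multiple-comparison correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hearing = rng.normal(30, 10, 25)                # per-subject hearing measure (toy data)
gray_matter = rng.standard_normal((25, 8000))   # gray matter values for 8000 voxels (toy data)

slopes = np.empty(8000)
pvals = np.empty(8000)
for v in range(8000):                           # mass-univariate fit, one voxel at a time
    fit = stats.linregress(hearing, gray_matter[:, v])
    slopes[v], pvals[v] = fit.slope, fit.pvalue

print((pvals < 0.001).sum(), "voxels pass an uncorrected p < 0.001 threshold")
```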

  7. Hearing loss in older adults affects neural systems supporting speech comprehension

    PubMed Central

    Peelle, Jonathan E.; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-01-01

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging (fMRI) to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry (VBM), demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task. PMID:21880924

  8. Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations

    PubMed Central

    Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.

    2009-01-01

    Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
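
    The region-of-interest group contrast described above (hallucinators versus nonhallucinators) amounts to comparing per-subject activation estimates between two independent groups. The sketch below uses simulated values and an independent-samples t-test purely to illustrate that step; the group sizes match the report, but the data and effect size are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated left-BA41 activation estimates (arbitrary units, toy data).
hallucinators = rng.normal(0.2, 0.5, 66)        # n = 66
nonhallucinators = rng.normal(0.5, 0.5, 40)     # n = 40

t, p = stats.ttest_ind(hallucinators, nonhallucinators)
print(f"t = {t:.2f}, p = {p:.4f}")
```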

  9. Blast-induced tinnitus and hyperactivity in the auditory cortex of rats.

    PubMed

    Luo, Hao; Pace, Edward; Zhang, Jinsheng

    2017-01-06

    Blast exposure can cause tinnitus and hearing impairment by damaging the auditory periphery and direct impact to the brain, which trigger neural plasticity in both auditory and non-auditory centers. However, the underlying neurophysiological mechanisms of blast-induced tinnitus are still unknown. In this study, we induced tinnitus in rats using blast exposure and investigated changes in spontaneous firing and bursting activity in the auditory cortex (AC) at one day, one month, and three months after blast exposure. Our results showed that spontaneous activity in the tinnitus-positive group began changing at one month after blast exposure, and manifested as robust hyperactivity at all frequency regions at three months after exposure. We also observed an increased bursting rate in the low-frequency region at one month after blast exposure and in all frequency regions at three months after exposure. Taken together, spontaneous firing and bursting activity in the AC played an important role in blast-induced chronic tinnitus as opposed to acute tinnitus, thus favoring a bottom-up mechanism. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. [Expression of NR2A in rat auditory cortex after sound insulation and auditory plasticity].

    PubMed

    Xia, Yin; Long, Haishan; Han, Demin; Gong, Shusheng; Lei, Li; Shi, Jinfeng; Fan, Erzhong; Li, Ying; Zhao, Qing

    2009-06-01

    To study the changes of N-methyl-D-aspartate (NMDA) receptor subunit 2A (NR2A) expression at local synapses in auditory cortices after early postnatal sound insulation and tone exposure. We prepared highly purified synaptosomes from primary auditory cortex by OptiPrep flotation gradient centrifugation and used Western blotting to compare NR2A expression after sound insulation at PND14, PND28 and PND42, and after tone exposure following 7 days of sound insulation. NR2A protein expression at PND14 and PND28 decreased significantly (P<0.05). After tone exposure following 7 days of sound insulation, the NR2A protein level increased significantly (P<0.05), indicating bidirectional regulation of NR2A protein. No significant effect of sound insulation or tone exposure was found on the relative NR2A expression level at PND42 (P>0.05). The results indicate that sound insulation and experience can modify the protein expression level of NR2A during the critical period of rat postnatal development. These findings provide important data for studying the mechanisms of the developmental plasticity of sensory functions.

  11. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  12. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
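
    A minimal sketch of two steps mentioned above: deriving a voxelwise best-frequency map from responses to several frequency bands, and spatially correlating a simple tuning-strength index with a quantitative R1 (myelin-sensitive) map. Array shapes, band centres and the tuning index are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats

def best_frequency_map(responses, band_centers_hz):
    """responses: (n_bands, n_voxels) mean activation per frequency band."""
    best_idx = np.argmax(responses, axis=0)              # preferred band per voxel
    best_freq = np.asarray(band_centers_hz)[best_idx]
    # Crude tuning-strength index: preferred minus mean response.
    strength = responses.max(axis=0) - responses.mean(axis=0)
    return best_freq, strength

rng = np.random.default_rng(3)
resp = rng.standard_normal((6, 5000))                    # 6 bands x 5000 voxels (toy data)
r1_map = rng.standard_normal(5000)                       # quantitative R1 per voxel (toy data)
best_freq, strength = best_frequency_map(resp, [250, 500, 1000, 2000, 4000, 8000])
rho, p = stats.spearmanr(strength, r1_map)               # spatial correspondence
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```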

  13. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  14. Preservation of Auditory P300-Like Potentials in Cortical Deafness

    PubMed Central

    Cavinato, Marianna; Rigon, Jessica; Volpato, Chiara; Semenza, Carlo; Piccione, Francesco

    2012-01-01

    The phenomenon of blindsight has been largely studied and refers to residual abilities of blind patients without an acknowledged visual awareness. Similarly, “deaf hearing” might represent a further example of dissociation between detection and perception of sounds. Here we report the rare case of a patient with persistent and complete cortical deafness caused by damage to the bilateral temporo-parietal lobes who occasionally showed unexpected reactions to environmental sounds despite denying that she could hear them. For the first time, we applied electrophysiological techniques to better understand the patient's auditory processing and perceptual awareness. While auditory brainstem responses were within normal limits, no middle- and long-latency waveforms could be identified. However, event-related potentials showed conflicting results. While the Mismatch Negativity could not be evoked, robust P3-like waveforms were surprisingly found in the latency range of 600–700 ms. The generation of P3-like potentials, despite extensive destruction of the auditory cortex, might imply the integrity of independent circuits necessary to process auditory stimuli even in the absence of consciousness of sound. Our results support the reverse hierarchy theory that asserts that the higher levels of the hierarchy are immediately available for perception, while low-level information requires more specific conditions. The accurate characterization in terms of anatomy and neurophysiology of the auditory lesions might facilitate understanding of the neural substrates involved in deaf-hearing. PMID:22272260

  15. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  16. Continuous vs. intermittent neurofeedback to regulate auditory cortex activity of tinnitus patients using real-time fMRI - A pilot study.

    PubMed

    Emmert, Kirsten; Kopel, Rotem; Koush, Yury; Maire, Raphael; Senn, Pascal; Van De Ville, Dimitri; Haller, Sven

    2017-01-01

    The emerging technique of real-time fMRI neurofeedback trains individuals to regulate their own brain activity via feedback from an fMRI measure of neural activity. Optimum feedback presentation has yet to be determined, particularly when working with clinical populations. To this end, we compared continuous against intermittent feedback in subjects with tinnitus. Fourteen participants with tinnitus completed the whole experiment consisting of nine runs (3 runs × 3 days). Prior to the neurofeedback, the target region was localized within the auditory cortex using auditory stimulation (1 kHz tone pulsating at 6 Hz) in an ON-OFF block design. During neurofeedback runs, participants received either continuous (n = 7, age 46.84 ± 12.01, Tinnitus Functional Index (TFI) 49.43 ± 15.70) or intermittent feedback (only after the regulation block) (n = 7, age 47.42 ± 12.39, TFI 49.82 ± 20.28). Participants were asked to decrease auditory cortex activity that was presented to them by a moving bar. In the first and the last session, participants also underwent arterial spin labeling (ASL) and resting-state fMRI imaging. We assessed tinnitus severity using the TFI questionnaire before all sessions, directly after all sessions and six weeks after all sessions. We then compared neuroimaging results from neurofeedback using a general linear model (GLM) and region-of-interest analysis as well as behavior measures employing a repeated-measures ANOVA. In addition, we looked at the seed-based connectivity of the auditory cortex using resting-state data and the cerebral blood flow using ASL data. GLM group analysis revealed that a considerable part of the target region within the auditory cortex was significantly deactivated during neurofeedback. When comparing continuous and intermittent feedback groups, the continuous group showed a stronger deactivation of parts of the target region, specifically the secondary auditory cortex. This result was confirmed in the region-of-interest analysis that showed a significant down-regulation effect for the continuous but not the intermittent group. Additionally, continuous feedback led to a slightly stronger effect over time while intermittent feedback showed best results in the first session. Behaviorally, there was no significant effect on the total TFI score, though on a descriptive level TFI scores tended to decrease after all sessions and in the six weeks follow up in the continuous group. Seed-based connectivity with a fixed-effects analysis revealed that functional connectivity increased over sessions in the posterior cingulate cortex, premotor area and part of the insula when looking at all patients while cerebral blood flow did not change significantly over time. Overall, these results show that continuous feedback is suitable for long-term neurofeedback experiments while intermittent feedback presentation promises good results for single session experiments when using the auditory cortex as a target region. In particular, the down-regulation effect is more pronounced in the secondary auditory cortex, which might be more susceptible to voluntary modulation in comparison to a primary sensory region.
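
    Continuous and intermittent presentation mainly differ in when a feedback value computed from the target region is shown to the participant. The sketch below illustrates one plausible way to compute such a value (percent signal change of the current volume relative to a baseline block); the baseline window, scaling and update scheme are assumptions for illustration and not the protocol used in the study.

```python
import numpy as np

def feedback_value(roi_signal, baseline_idx, current_idx):
    """Percent signal change of the current volume relative to a baseline block;
    lower values indicate more successful down-regulation."""
    baseline = np.mean(roi_signal[baseline_idx])
    return 100.0 * (roi_signal[current_idx] - baseline) / baseline

rng = np.random.default_rng(4)
roi = 1000 + rng.normal(0, 5, 60)                  # 60 volumes of toy ROI signal

# Continuous feedback: the bar is updated on every volume of the regulation block.
continuous = [feedback_value(roi, slice(0, 10), t) for t in range(10, 60)]
# Intermittent feedback: only a block summary is shown after regulation ends.
intermittent = float(np.mean(continuous))
print(round(intermittent, 2))
```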

  17. Control of Biosonar Behavior by the Auditory Cortex

    DTIC Science & Technology

    1988-11-28

    Report documentation record (fragmentary OCR). Personal authors: Nobuo Suga and Stephen Gaioni. Subject terms: biosonar; echolocation. Lesion experiments were conducted to examine whether the functional organization of the mustached bat's auditory cortex is related to biosonar …

  18. Prefrontal cortex based sex differences in tinnitus perception: same tinnitus intensity, same tinnitus distress, different mood.

    PubMed

    Vanneste, Sven; Joos, Kathleen; De Ridder, Dirk

    2012-01-01

    Tinnitus refers to auditory phantom sensation. It is estimated that for 2% of the population this auditory phantom percept severely affects the quality of life, due to tinnitus related distress. Although the overall distress levels do not differ between sexes in tinnitus, females are more influenced by distress than males. Typically, pain, sleep, and depression are perceived as significantly more severe by female tinnitus patients. Studies on gender differences in emotional regulation indicate that females with high depressive symptoms show greater attention to emotion, and use less anti-rumination emotional repair strategies than males. The objective of this study was to verify whether the activity and connectivity of the resting brain is different for male and female tinnitus patients using resting-state EEG. Females had a higher mean score than male tinnitus patients on the BDI-II. Female tinnitus patients differ from male tinnitus patients in the orbitofrontal cortex (OFC) extending to the frontopolar cortex in beta1 and beta2. The OFC is important for emotional processing of sounds. Increased functional alpha connectivity is found between the OFC, insula, subgenual anterior cingulate (sgACC), parahippocampal (PHC) areas and the auditory cortex in females. Our data suggest increased functional connectivity that binds tinnitus-related auditory cortex activity to auditory emotion-related areas via the PHC-sgACC connections resulting in a more depressive state even though the tinnitus intensity and tinnitus-related distress are not different from men. Comparing male tinnitus patients to a control group of males significant differences could be found for beta3 in the posterior cingulate cortex (PCC). The PCC might be related to cognitive and memory-related aspects of the tinnitus percept. Our results propose that sex influences in tinnitus research cannot be ignored and should be taken into account in functional imaging studies related to tinnitus.

  19. Tracing the neural basis of auditory entrainment.

    PubMed

    Lehmann, Alexandre; Arias, Diana Jimena; Schönwiesner, Marc

    2016-11-19

    Neurons in the auditory cortex synchronize their responses to temporal regularities in sound input. This coupling or "entrainment" is thought to facilitate beat extraction and rhythm perception in temporally structured sounds, such as music. As a consequence of such entrainment, the auditory cortex responds to an omitted (silent) sound in a regular sequence. Although previous studies suggest that the auditory brainstem frequency-following response (FFR) exhibits some of the beat-related effects found in the cortex, it is unknown whether omissions of sounds evoke a brainstem response. We simultaneously recorded cortical and brainstem responses to isochronous and irregular sequences of consonant-vowel syllable /da/ that contained sporadic omissions. The auditory cortex responded strongly to omissions, but we found no evidence of evoked responses to omitted stimuli from the auditory brainstem. However, auditory brainstem responses in the isochronous sound sequence were more consistent across trials than in the irregular sequence. These results indicate that the auditory brainstem faithfully encodes short-term acoustic properties of a stimulus and is sensitive to sequence regularity, but does not entrain to isochronous sequences sufficiently to generate overt omission responses, even for sequences that evoke such responses in the cortex. These findings add to our understanding of the processing of sound regularities, which is an important aspect of human cognitive abilities like rhythm, music and speech perception. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
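
    The trial-to-trial consistency measure referred to above can be approximated by correlating each single-trial brainstem waveform with the average of the remaining trials. The sketch below does this on simulated data, with different noise levels standing in for the isochronous and irregular sequences; the waveform shape, trial counts and noise levels are illustrative assumptions.

```python
import numpy as np

def trial_consistency(trials):
    """trials: (n_trials, n_samples) single-trial response waveforms."""
    scores = []
    for i in range(len(trials)):
        others = np.delete(trials, i, axis=0).mean(axis=0)   # leave-one-out average
        scores.append(np.corrcoef(trials[i], others)[0, 1])
    return float(np.mean(scores))

rng = np.random.default_rng(5)
t = np.arange(0, 0.17, 1 / 16000)                  # 170 ms at 16 kHz
template = np.sin(2 * np.pi * 100 * t)             # toy 100-Hz FFR-like waveform
isochronous = template + 0.5 * rng.standard_normal((300, t.size))   # lower noise (toy)
irregular = template + 1.0 * rng.standard_normal((300, t.size))     # higher noise (toy)
print(trial_consistency(isochronous) > trial_consistency(irregular))  # True on toy data
```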

  20. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    PubMed

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
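
    The linear forward mapping from the speech envelope to EEG described above is commonly estimated as a temporal response function via regularized (ridge) regression on time-lagged copies of the stimulus. The sketch below is a bare-bones version of that idea on toy data; the lag range, regularization strength and array shapes are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def lagged_design_matrix(envelope, n_lags):
    """Stack time-lagged copies of the stimulus envelope: (n_samples, n_lags)."""
    X = np.zeros((len(envelope), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[:len(envelope) - lag]
    return X

def fit_trf(envelope, eeg, n_lags=32, alpha=1.0):
    """eeg: (n_samples, n_channels). Returns (n_lags, n_channels) TRF weights."""
    X = lagged_design_matrix(envelope, n_lags)
    # Ridge solution (X'X + alpha*I)^-1 X'Y, solved for all channels at once.
    xtx = X.T @ X + alpha * np.eye(n_lags)
    return np.linalg.solve(xtx, X.T @ eeg)

rng = np.random.default_rng(6)
envelope = np.abs(rng.standard_normal(4096))        # toy speech envelope
eeg = rng.standard_normal((4096, 64))               # toy 64-channel EEG
trf = fit_trf(envelope, eeg)
print(trf.shape)                                     # (32, 64)
```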

  1. Plasticity in the Developing Auditory Cortex: Evidence from Children with Sensorineural Hearing Loss and Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Cardon, Garrett; Campbell, Julia; Sharma, Anu

    2013-01-01

    The developing auditory cortex is highly plastic. As such, the cortex is both primed to mature normally and at risk for re-organizing abnormally, depending upon numerous factors that determine central maturation. From a clinical perspective, at least two major components of development can be manipulated: 1) input to the cortex and 2) the timing of cortical input. Children with sensorineural hearing loss (SNHL) and auditory neuropathy spectrum disorder (ANSD) have provided a model of early deprivation of sensory input to the cortex, and demonstrated the resulting plasticity and development that can occur upon introduction of stimulation. In this article, we review several fundamental principles of cortical development and plasticity and discuss the clinical applications in children with SNHL and ANSD who receive intervention with hearing aids and/or cochlear implants. PMID:22668761

  2. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  3. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    PubMed

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from sound-level distributions with different modes (15 vs 45 dB). Auditory cortex neurons adapted to sound-level statistics in younger and older adults, but adaptation was incomplete in older people. The data suggest that the aging auditory system does not fully capitalize on the statistics available in sound environments to tune the perceptual system dynamically. Copyright © 2018 the authors 0270-6474/18/381989-11$15.00/0.

  4. Emergent selectivity for task-relevant stimuli in higher-order auditory cortex

    PubMed Central

    Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.

    2014-01-01

    A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467

  5. Acetylcholinesterase Inhibition and Information Processing in the Auditory Cortex

    DTIC Science & Technology

    1986-04-30

    Annual summary report (fragmentary OCR). The recoverable fragments concern compounds which alter cholinergic transmission, in particular anticholinesterases, their relation to auditory hallucinations and the upper auditory system, and the ability to attend to and understand verbal messages in humans irrespective of the particular voice which speaks them …

  6. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability

    PubMed Central

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-01-01

    An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799

  7. Visual cortex entrains to sign language.

    PubMed

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

    Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (~8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
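
    A simple frame-to-frame metric of visual change, and its spectral coherence with an EEG signal, can be sketched as below. This is only one plausible instantiation of the kind of metric described above; the change measure, frame rate and coherence parameters are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def visual_change(frames):
    """frames: (n_frames, height, width) grayscale video.
    Mean absolute pixel change between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

rng = np.random.default_rng(7)
video = rng.random((300, 48, 64))                  # 300 frames of toy video
change = visual_change(video)                      # length 299
eeg = rng.standard_normal(change.size)             # toy EEG resampled to the frame rate
freqs, coh = signal.coherence(change, eeg, fs=30.0, nperseg=128)
print(f"peak coherence below 5 Hz: {coh[freqs < 5.0].max():.2f}")
```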

  8. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual takeover, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex

    PubMed Central

    Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2010-01-01

    Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343
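
    The pooling problem discussed above can be caricatured with a toy population model: a minority of monotonically level-tuned neurons can dominate the summed (imaging-like) signal at high stimulus intensities. The tuning shapes, proportions and parameters below are illustrative assumptions, not the model used in the study.

```python
import numpy as np

def pooled_response(level_db, n_nonmono=900, n_mono=100, seed=8):
    rng = np.random.default_rng(seed)              # toy parameters throughout
    # Nonmonotonic (level-tuned) neurons: Gaussian tuning around a best level.
    best_level = rng.uniform(20, 60, n_nonmono)
    nonmono = np.exp(-0.5 * ((level_db - best_level) / 10.0) ** 2)
    # Monotonic neurons: sigmoidal growth of response with level.
    threshold = rng.uniform(10, 40, n_mono)
    mono = 1.0 / (1.0 + np.exp(-(level_db - threshold) / 5.0))
    return nonmono.sum() + mono.sum()              # pooled, imaging-like signal

for level in (30, 50, 70, 90):
    # At high levels the pooled signal is carried mostly by the monotonic minority.
    print(level, round(pooled_response(level), 1))
```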

  10. Two-Photon Functional Imaging of the Auditory Cortex in Behaving Mice: From Neural Networks to Single Spines.

    PubMed

    Li, Ruijie; Wang, Meng; Yao, Jiwei; Liang, Shanshan; Liao, Xiang; Yang, Mengke; Zhang, Jianxiong; Yan, Junan; Jia, Hongbo; Chen, Xiaowei; Li, Xingyi

    2018-01-01

    In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals for acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the abundant temporal muscles, arteries around the ears and their lateral locations. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. By using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging and in combination with targeted cell-attached recordings of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded the Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites and single spines, in auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored the neuronal dynamics over days throughout the process of associative learning. Therefore, we achieved two-photon functional imaging at multiple scales in auditory cortex of behaving mice, which extends the tool box for investigating the neural basis of audition-related behaviors.
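
    A standard preprocessing step for the kind of recordings described above is converting a raw fluorescence trace to ΔF/F. The sketch below uses a low-percentile baseline, which is one common but not universal choice; the trace, baseline definition and transient are illustrative assumptions.

```python
import numpy as np

def delta_f_over_f(fluorescence, baseline_percentile=20):
    """fluorescence: raw trace of one ROI (e.g., a soma or a single spine)."""
    f0 = np.percentile(fluorescence, baseline_percentile)   # baseline estimate
    return (fluorescence - f0) / f0

rng = np.random.default_rng(9)
trace = 100 + 5 * rng.standard_normal(2000)        # toy fluorescence trace
trace[500:520] += 60                               # a toy Ca2+ transient
dff = delta_f_over_f(trace)
print(round(float(dff[505]), 2), round(float(dff.mean()), 2))
```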

  11. Two-Photon Functional Imaging of the Auditory Cortex in Behaving Mice: From Neural Networks to Single Spines

    PubMed Central

    Li, Ruijie; Wang, Meng; Yao, Jiwei; Liang, Shanshan; Liao, Xiang; Yang, Mengke; Zhang, Jianxiong; Yan, Junan; Jia, Hongbo; Chen, Xiaowei; Li, Xingyi

    2018-01-01

    In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals for acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the abundant temporal muscles, arteries around the ears and their lateral locations. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. By using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging and in combination with targeted cell-attached recordings of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded the Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites and single spines, in auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored the neuronal dynamics over days throughout the process of associative learning. Therefore, we achieved two-photon functional imaging at multiple scales in auditory cortex of behaving mice, which extends the tool box for investigating the neural basis of audition-related behaviors. PMID:29740289

  12. Primary Auditory Cortex Regulates Threat Memory Specificity

    ERIC Educational Resources Information Center

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  13. Oxytocin Enables Maternal Behavior by Balancing Cortical Inhibition

    PubMed Central

    Marlin, Bianca J.; Mitre, Mariela; D’amour, James A.; Chao, Moses V.; Froemke, Robert C.

    2015-01-01

    Oxytocin is important for social interactions and maternal behavior. However, little is known about when, where, and how oxytocin modulates neural circuits to improve social cognition. Here we show how oxytocin enables pup retrieval behavior in female mice by enhancing auditory cortical pup call responses. Retrieval behavior required left but not right auditory cortex, was accelerated by oxytocin in left auditory cortex, and oxytocin receptors were preferentially expressed in left auditory cortex. Neural responses to pup calls were lateralized, with co-tuned and temporally-precise excitatory and inhibitory responses in left cortex of maternal but not pup-naive adults. Finally, pairing calls with oxytocin enhanced responses by balancing the magnitude and timing of inhibition with excitation. Our results describe fundamental synaptic mechanisms by which oxytocin increases the salience of acoustic social stimuli. Furthermore, oxytocin-induced plasticity provides a biological basis for lateralization of auditory cortical processing. PMID:25874674

  14. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
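
    The decoding step described above (a regularized multivariate classifier applied to evoked potentials) can be sketched with off-the-shelf tools. The estimator, regularization strength and cross-validation scheme below are illustrative choices on simulated data, not the authors' classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n_trials, n_channels, n_times = 200, 96, 150
X = rng.standard_normal((n_trials, n_channels * n_times))   # simulated, flattened evoked potentials
y = rng.integers(0, 10, n_trials)                            # 10 vocalization identities (toy labels)

clf = make_pipeline(StandardScaler(),
                    LogisticRegression(C=0.01, max_iter=1000))  # heavily regularized
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.10)")
```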

  15. Differential Coding of Conspecific Vocalizations in the Ventral Auditory Cortical Stream

    PubMed Central

    Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2014-01-01

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  16. Neural correlates of short-term memory in primate auditory cortex

    PubMed Central

    Bigelow, James; Rossi, Breein; Poremba, Amy

    2014-01-01

    Behaviorally-relevant sounds such as conspecific vocalizations are often available for only a brief amount of time; thus, goal-directed behavior frequently depends on auditory short-term memory (STM). Despite its ecological significance, the neural processes underlying auditory STM remain poorly understood. To investigate the role of the auditory cortex in STM, single- and multi-unit activity was recorded from the primary auditory cortex (A1) of two monkeys performing an auditory STM task using simple and complex sounds. Each trial consisted of a sample and test stimulus separated by a 5-s retention interval. A brief wait period followed the test stimulus, after which subjects pressed a button if the sounds were identical (match trials) or withheld button presses if they were different (non-match trials). A number of units exhibited significant changes in firing rate for portions of the retention interval, although these changes were rarely sustained. Instead, they were most frequently observed during the early and late portions of the retention interval, with inhibition being observed more frequently than excitation. At the population level, responses elicited on match trials were briefly suppressed early in the sound period relative to non-match trials. However, during the latter portion of the sound, firing rates increased significantly for match trials and remained elevated throughout the wait period. Related patterns of activity were observed in prior experiments from our lab in the dorsal temporal pole (dTP) and prefrontal cortex (PFC) of the same animals. The data suggest that early match suppression occurs in both A1 and the dTP, whereas later match enhancement occurs first in the PFC, followed by A1 and later in dTP. Because match enhancement occurs first in the PFC, we speculate that enhancement observed in A1 and dTP may reflect top–down feedback. Overall, our findings suggest that A1 forms part of the larger neural system recruited during auditory STM. PMID:25177266

  17. Hearing in noisy environments: noise invariance and contrast gain control

    PubMed Central

    Willmore, Ben D B; Cooke, James E; King, Andrew J

    2014-01-01

    Contrast gain control has recently been identified as a fundamental property of the auditory system. Electrophysiological recordings in ferrets have shown that neurons continuously adjust their gain (their sensitivity to change in sound level) in response to the contrast of sounds that are heard. At the level of the auditory cortex, these gain changes partly compensate for changes in sound contrast. This means that sounds which are structurally similar, but have different contrasts, have similar neuronal representations in the auditory cortex. As a result, the cortical representation is relatively invariant to stimulus contrast and robust to the presence of noise in the stimulus. In the inferior colliculus (an important subcortical auditory structure), gain changes are less reliably compensatory, suggesting that contrast- and noise-invariant representations are constructed gradually as one ascends the auditory pathway. In addition to noise invariance, contrast gain control provides a variety of computational advantages over static neuronal representations; it makes efficient use of neuronal dynamic range, may contribute to redundancy-reducing, sparse codes for sound and allows for simpler decoding of population responses. The circuits underlying auditory contrast gain control are still under investigation. As in the visual system, these circuits may be modulated by factors other than stimulus contrast, forming a potential neural substrate for mediating the effects of attention as well as interactions between the senses. PMID:24907308
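
    The compensation idea described above can be caricatured with a toy divisive gain-control model: the neuron's output gain is rescaled by an estimate of recent stimulus contrast, so that responses to high- and low-contrast versions of the same sound become more similar. The functional form and constants are illustrative assumptions.

```python
import numpy as np

def gain_controlled_response(levels_db, sigma_half=10.0):
    """levels_db: sound level over time. Output is the deviation from the local
    mean, divisively rescaled by an estimate of stimulus contrast."""
    contrast = levels_db.std()                     # contrast estimate (SD of level)
    gain = 1.0 / (sigma_half + contrast)           # higher contrast -> lower gain
    return gain * (levels_db - levels_db.mean())

rng = np.random.default_rng(11)
low_contrast = 60 + 5 * rng.standard_normal(1000)    # toy level sequences
high_contrast = 60 + 15 * rng.standard_normal(1000)
# Output variability differs far less than the threefold difference in input contrast.
print(round(float(np.std(gain_controlled_response(low_contrast))), 2),
      round(float(np.std(gain_controlled_response(high_contrast))), 2))
```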

  18. Phonological Processing in Human Auditory Cortical Fields

    PubMed Central

    Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.

    2011-01-01

    We used population-based cortical-surface analysis of functional magnetic resonance imaging (fMRI) data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252

  19. Congenital deafness affects deep layers in primary and secondary auditory cortex

    PubMed Central

    Berger, Christoph; Kühne, Daniela; Scheper, Verena

    2017-01-01

    Congenital deafness leads to functional deficits in the auditory cortex for which early cochlear implantation can effectively compensate. Most of these deficits have been demonstrated functionally. Furthermore, the majority of previous studies on deafness have involved the primary auditory cortex; knowledge of higher-order areas is limited to effects of cross-modal reorganization. In this study, we compared the cortical cytoarchitecture of four cortical areas in adult hearing and congenitally deaf cats (CDCs): the primary auditory field A1, two secondary auditory fields (the dorsal zone and the second auditory field, A2), and a reference visual association field (area 7), assessed in the same sections stained using either Nissl or SMI-32 antibodies. The general cytoarchitectonic pattern and the area-specific characteristics in the auditory cortex remained unchanged in animals with congenital deafness. Whereas area 7 did not differ between the groups investigated, all auditory fields were slightly thinner in CDCs, this being caused by reduced thickness of layers IV–VI. The study documents that, while the cytoarchitectonic patterns are in general independent of sensory experience, reduced layer thickness is observed in layer IV and the infragranular layers of both primary and higher-order auditory fields. The study demonstrates differences in the effects of congenital deafness between supragranular and other cortical layers, but similar dystrophic effects in all investigated auditory fields. PMID:28643417

  20. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    PubMed Central

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses, we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements, and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high-order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolition of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness of linguistic auditory stimuli during NREM sleep is linked to diminished activity in high-order processing stations. PMID:27310812

  1. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    PubMed

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses, we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements, and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high-order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolition of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness of linguistic auditory stimuli during NREM sleep is linked to diminished activity in high-order processing stations.

  2. GABA(A) receptors in visual and auditory cortex and neural activity changes during basic visual stimulation.

    PubMed

    Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg

    2012-01-01

    Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) states in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
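
    A minimal sketch of the kind of across-subject correlation reported here: relating a local-to-global GABA(A) binding-potential ratio to the EC-to-EO activity change. The data below are simulated placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 12

# Hypothetical per-subject values; in the study these came from
# [18F]flumazenil PET (binding potentials) and fMRI (EC-to-EO signal change).
regional_bp = rng.normal(5.0, 0.8, n_subjects)   # GABA(A) binding, visual cortex
global_bp = rng.normal(4.0, 0.5, n_subjects)     # global (whole-brain) binding
local_to_global = regional_bp / global_bp

# Toy "EC-to-EO activity change" constructed to covary with the ratio,
# just so the correlation step has something to find.
ec_to_eo_change = 0.6 * local_to_global + rng.normal(0, 0.05, n_subjects)

r, p = stats.pearsonr(local_to_global, ec_to_eo_change)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```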

  3. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2012-06-07

    In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque

    PubMed Central

    Fukushima, Makoto; Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2012-01-01

    In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here we used chronic micro-electrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. PMID:22681693
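
    A hedged sketch of the core comparison: correlating the pairwise correlation structure of spontaneous activity with pairwise best-frequency distance across electrodes. The simulated 96-channel data and the Gaussian mixing used to generate them are illustrative assumptions, not the recorded ECoG.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_elec, n_time = 96, 4000

# Hypothetical best frequencies in log2(kHz), increasing caudorostrally.
best_freq = np.linspace(-1, 5, n_elec)

# Simulated spontaneous high-gamma power: sites with similar best frequencies
# share more common signal, standing in for tonotopically organised activity.
bf_dist = np.abs(best_freq[:, None] - best_freq[None, :])
mixing = np.exp(-bf_dist**2 / 2.0)
spont = mixing @ rng.normal(size=(n_elec, n_time)) + rng.normal(size=(n_elec, n_time))

# Does the spontaneous correlation structure mirror the tonotopic map?
corr = np.corrcoef(spont)
iu = np.triu_indices(n_elec, k=1)
rho, p = stats.spearmanr(corr[iu], bf_dist[iu])
print(f"spontaneous correlation vs. best-frequency distance: rho = {rho:.2f}, p = {p:.1e}")
```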

  5. Research and Studies Directory for Manpower, Personnel, and Training

    DTIC Science & Technology

    1989-05-01

    Directory excerpt (fragmentary search-result snippet) listing funded research projects, including: Control of Biosonar Behavior by the Auditory Cortex (Air Force Office of Scientific Research); A Model for … Visual Attention; Auditory Perception of Complex Sounds; Eye Movements and Spatial Pattern Vision.

  6. Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.

    PubMed

    Hage, Steffen R

    2018-03-20

    Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons were recorded from the PMvr of rhesus monkeys (Macaca mulatta) while the animals volitionally produced vocalizations in a visual detection task or passively listened to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Quantification of mid and late evoked sinks in laminar current source density profiles of columns in the primary auditory cortex

    PubMed Central

    Schaefer, Markus K.; Hechavarría, Julio C.; Kössl, Manfred

    2015-01-01

    Current source density (CSD) analysis assesses spatiotemporal synaptic activations at somatic and/or dendritic levels in the form of depolarizing current sinks. Whereas many studies have focused on the short (<50 ms) latency sinks, associated with thalamocortical projections, sinks with longer latencies have received less attention. Here, we analyzed laminar CSD patterns for the first 600 ms after stimulus onset in the primary auditory cortex of Mongolian gerbils. By applying an algorithm for contour calculation, three distinct mid and four late evoked sinks were identified in layers I, III, Va, VIa, and VIb. Our results further showed that the patterns of intracortical information-flow remained qualitatively similar for low and for high sound pressure level stimuli at the characteristic frequency (CF) as well as for stimuli ± 1 octave from CF. There were, however, differences associated with the strength, vertical extent, onset latency, and duration of the sinks for the four stimulation paradigms used. Stimuli one octave above the most sensitive frequency evoked a new, and quite reliable, sink in layer Va whereas low level stimulation led to the disappearance of the layer VIb sink. These data indicate the presence of input sources specifically activated in response to level and/or frequency parameters. Furthermore, spectral integration above vs. below the CF of neurons is asymmetric as illustrated by CSD profiles. These results are important because synaptic feedback associated with mid and late sinks—beginning at 50 ms post stimulus latency—is likely crucial for response modulation resulting from higher order processes like memory, learning or cognitive control. PMID:26557058
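
    For readers unfamiliar with CSD analysis, the snippet below shows the standard second-spatial-derivative estimate applied to a toy laminar LFP array; the conductivity value and the simulated data are placeholders, not the gerbil recordings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy laminar LFP: 16 contacts, 100 µm spacing, 600 samples at 1 kHz (mV).
n_chan, n_samp, spacing_mm = 16, 600, 0.1
lfp = rng.normal(size=(n_chan, n_samp)).cumsum(axis=1) * 1e-3

# Standard second-spatial-derivative CSD estimate:
#   CSD(z) ~ -sigma * (phi(z + h) - 2 * phi(z) + phi(z - h)) / h**2
sigma = 0.3  # assumed tissue conductivity in S/m (order-of-magnitude value)
csd = -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / spacing_mm**2

print(csd.shape)  # (14, 600): one CSD trace per interior contact
# In this convention, current sinks appear as negative deflections.
```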

  8. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia.

    PubMed

    Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P

    2013-06-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.

  9. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia

    PubMed Central

    Teki, Sundeep; Barnes, Gareth R.; Penny, William D.; Iverson, Paul; Woodhead, Zoe V. J.; Griffiths, Timothy D.; Leff, Alexander P.

    2013-01-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics’ speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired. PMID:23715097

  10. The role of auditory cortex in retention of rhythmic patterns as studied in patients with temporal lobe removals including Heschl's gyrus.

    PubMed

    Penhune, V B; Zatorre, R J; Feindel, W H

    1999-03-01

    This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.

  11. High resolution 1H NMR-based metabonomic study of the auditory cortex analogue of developing chick (Gallus gallus domesticus) following prenatal chronic loud music and noise exposure.

    PubMed

    Kumar, Vivek; Nag, Tapas Chandra; Sharma, Uma; Mewar, Sujeet; Jagannathan, Naranamangalam R; Wadhwa, Shashi

    2014-10-01

    Proper functional development of the auditory cortex (ACx) critically depends on early relevant sensory experiences. Exposure to high-intensity noise (industrial/traffic) and music, a current public health concern, may disrupt the proper development of the ACx and associated behavior. The biochemical mechanisms associated with such activity-dependent changes during development are poorly understood. Here we report the effects of prenatal chronic (last 10 days of incubation), 110 dB sound pressure level (SPL) music and noise exposure on the metabolic profile of the auditory cortex analogue/field L (AuL) in domestic chicks. Perchloric acid extracts of AuL of post-hatch day 1 chicks from control, music and noise groups were subjected to high resolution (700 MHz) (1)H NMR spectroscopy. Multivariate regression analysis of the concentration data of 18 metabolites revealed a significant class separation between control and loud-sound-exposed groups, indicating a metabolic perturbation. Comparison of absolute concentrations of metabolites showed that overstimulation with loud sound, independent of spectral characteristics (music or noise), led to extensive usage of major energy metabolites, e.g., glucose, β-hydroxybutyrate and ATP. On the other hand, high glutamine levels and sustained levels of neuromodulators and alternate energy sources, e.g., creatine, ascorbate and lactate, indicated a systems restorative measure in a condition of neuronal hyperactivity. At the same time, decreased aspartate and taurine levels in the noise group suggested a differential impact of prenatal chronic loud noise over music exposure. Thus, prenatal exposure to loud sound, especially noise, alters the metabolic activity in the AuL, which in turn can affect functional development and later auditory-associated behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.
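
    A small illustrative sketch of an unsupervised multivariate view of such metabolite tables (here plain PCA on standardized concentrations, as a stand-in for the multivariate regression used in the study); group sizes and concentration values are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_per_group, n_metabolites = 8, 18

# Hypothetical metabolite tables (rows = chicks, columns = metabolites);
# real values would come from quantified 1H NMR spectra.
control = rng.normal(1.0, 0.1, (n_per_group, n_metabolites))
music = rng.normal(0.9, 0.1, (n_per_group, n_metabolites))
noise = rng.normal(0.8, 0.1, (n_per_group, n_metabolites))

X = np.vstack([control, music, noise])
labels = ["control"] * n_per_group + ["music"] * n_per_group + ["noise"] * n_per_group

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for group in ("control", "music", "noise"):
    idx = [i for i, g in enumerate(labels) if g == group]
    print(group, scores[idx].mean(axis=0).round(2))  # group centroid in PC space
```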

  12. The effects of neck flexion on cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in related sensory cortices

    PubMed Central

    2012-01-01

    Background: A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods: Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results: Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions: Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306

  13. Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets

    PubMed Central

    Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.

    2017-01-01

    The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489

  14. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
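
    The two behavioural measures can be illustrated with a toy free-field data set: accuracy as mean absolute localisation error and variability as the within-speaker standard deviation of responses. The speaker layout follows the description above; the responses are simulated, not patient data.

```python
import numpy as np

rng = np.random.default_rng(6)
speaker_angles = np.arange(-90, 91, 30)            # seven speakers, 30° apart

# Hypothetical responses (reported angle, degrees), 10 trials per speaker.
true = np.repeat(speaker_angles, 10).astype(float)
responses = true + rng.normal(0, 12, true.size)

error = responses - true
accuracy = np.abs(error).mean()                    # mean absolute error
variability = np.mean([responses[true == a].std(ddof=1) for a in speaker_angles])

print(f"accuracy (mean absolute error): {accuracy:.1f} deg")
print(f"response variability (mean within-speaker SD): {variability:.1f} deg")
```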

  15. Reduced variability of auditory alpha activity in chronic tinnitus.

    PubMed

    Schlee, Winfried; Schecklmann, Martin; Lehner, Astrid; Kreuzer, Peter M; Vielsmeier, Veronika; Poeppl, Timm B; Langguth, Berthold

    2014-01-01

    Subjective tinnitus is characterized by the conscious perception of a phantom sound which is usually more prominent under silence. Resting state recordings without any auditory stimulation demonstrated a decrease of cortical alpha activity in temporal areas of subjects with an ongoing tinnitus perception. This is often interpreted as an indicator for enhanced excitability of the auditory cortex in tinnitus. In this study we want to further investigate this effect by analysing the moment-to-moment variability of the alpha activity in temporal areas. Magnetoencephalographic resting state recordings of 21 tinnitus subjects and 21 healthy controls were analysed with respect to the mean and the variability of spectral power in the alpha frequency band over temporal areas. A significant decrease of auditory alpha activity was detected for the low alpha frequency band (8-10 Hz) but not for the upper alpha band (10-12 Hz). Furthermore, we found a significant decrease of alpha variability for the tinnitus group. This result was significant for the lower alpha frequency range and not significant for the upper alpha frequencies. Tinnitus subjects with a longer history of tinnitus showed less variability of their auditory alpha activity which might be an indicator for reduced adaptability of the auditory cortex in chronic tinnitus.
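
    A minimal sketch of how band-limited power and its moment-to-moment variability might be computed from a single resting-state channel (Welch spectra in consecutive windows, then the coefficient of variation across windows); the signal here is white noise, purely for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs, duration = 250.0, 300.0                       # Hz, seconds
x = rng.normal(size=int(fs * duration))           # stand-in for one temporal MEG channel

# Low-alpha (8-10 Hz) power in consecutive 2-s windows.
win = int(2 * fs)
alpha_power = []
for start in range(0, len(x) - win + 1, win):
    f, pxx = signal.welch(x[start:start + win], fs=fs, nperseg=win)
    band = (f >= 8) & (f <= 10)
    alpha_power.append(pxx[band].sum() * (f[1] - f[0]))  # integrated band power

alpha_power = np.asarray(alpha_power)
print(f"mean low-alpha power: {alpha_power.mean():.3e}")
print(f"moment-to-moment variability (CV): {alpha_power.std() / alpha_power.mean():.2f}")
```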

  16. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.

    PubMed

    Tian, Biao; Rauschecker, Josef P

    2004-11-01

    Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
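
    As a rough sketch of the stimulus space and a standard selectivity measure, the code below builds linear "up" and "down" FM sweeps and computes a direction selectivity index from hypothetical spike counts; the parameters are illustrative, not those used in the recordings.

```python
import numpy as np
from scipy.signal import chirp

fs = 100_000                     # Hz, sampling rate for the stimulus
t = np.arange(0, 0.1, 1 / fs)    # 100-ms sweep

# Linear FM sweeps: "up" from 1 to 16 kHz and "down" from 16 to 1 kHz.
sweep_up = chirp(t, f0=1_000, f1=16_000, t1=t[-1], method="linear")
sweep_down = chirp(t, f0=16_000, f1=1_000, t1=t[-1], method="linear")

# FM rate in Hz/ms, the parameter varied in the study.
fm_rate = (16_000 - 1_000) / (t[-1] * 1_000)
print(f"FM rate: {fm_rate:.0f} Hz/ms")

# Hypothetical spike counts for one neuron; a standard direction
# selectivity index contrasts responses to the two sweep directions.
resp_up, resp_down = 42, 18
dsi = (resp_up - resp_down) / (resp_up + resp_down)
print(f"direction selectivity index: {dsi:.2f}")
```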

  17. An anatomical and functional topography of human auditory cortical areas

    PubMed Central

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426

  18. Mismatch Negativity in Recent-Onset and Chronic Schizophrenia: A Current Source Density Analysis

    PubMed Central

    Fulham, W. Ross; Michie, Patricia T.; Ward, Philip B.; Rasser, Paul E.; Todd, Juanita; Johnston, Patrick J.; Thompson, Paul M.; Schall, Ulrich

    2014-01-01

    Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups, involved reduced hemispheric asymmetry, and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically-constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal response and no lateralisation in frontal ROIs. For late MMN, patients had reduced bilateral parietal response and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However amplitude reductions were observed in both primary and secondary auditory cortex. These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia. PMID:24949859
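
    For orientation, MMN is conventionally computed as the deviant-minus-standard difference of averaged epochs; the sketch below does this on simulated single-channel data with an injected toy negativity. Sampling rate, trial counts and amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
fs, n_trials, n_samp = 500, 200, 300          # Hz, trials, samples (600-ms epochs)
times = np.arange(n_samp) / fs - 0.1          # epoch from -100 to +500 ms

# Hypothetical single-channel epochs (e.g. Fz) for standard and deviant tones.
standards = rng.normal(0, 5, (n_trials, n_samp))
deviants = rng.normal(0, 5, (n_trials, n_samp))
# Inject a small negativity around 150-250 ms into the deviants (toy effect).
deviants[:, (times > 0.15) & (times < 0.25)] -= 2.0

# MMN is the deviant-minus-standard difference of the averaged responses.
mmn = deviants.mean(axis=0) - standards.mean(axis=0)
window = (times > 0.1) & (times < 0.3)
print(f"peak MMN amplitude: {mmn[window].min():.2f} µV "
      f"at {times[window][mmn[window].argmin()] * 1000:.0f} ms")
```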

  19. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  20. The harmonic organization of auditory cortex.

    PubMed

    Wang, Xiaoqin

    2013-12-17

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  1. Narrow sound pressure level tuning in the auditory cortex of the bats Molossus molossus and Macrotus waterhousii.

    PubMed

    Macías, Silvio; Hechavarría, Julio C; Cobo, Ariadna; Mora, Emanuel C

    2014-03-01

    In the auditory system, tuning to sound level appears in the form of non-monotonic response-level functions that depict the response of a neuron to changing sound levels. Neurons with non-monotonic response-level functions respond best to a particular sound pressure level (defined as "best level" or level evoking the maximum response). We performed a comparative study on the location and basic functional organization of the auditory cortex in the gleaning bat, Macrotus waterhousii, and the aerial-hawking bat, Molossus molossus. Here, we describe the response-level function of cortical units in these two species. In the auditory cortices of M. waterhousii and M. molossus, the characteristic frequency of the units increased from caudal to rostral. In M. waterhousii, there was an even distribution of characteristic frequencies while in M. molossus there was an overrepresentation of frequencies present within echolocation pulses. In both species, most of the units showed best levels in a narrow range, without an evident topography in the amplitopic organization, as described in other species. During flight, bats decrease the intensity of their emitted pulses when they approach a prey item or an obstacle resulting in maintenance of perceived echo intensity. Narrow level tuning likely contributes to the extraction of echo amplitudes facilitating echo-intensity compensation. For aerial-hawking bats, like M. molossus, receiving echoes within the optimal sensitivity range can help the bats to sustain consistent analysis of successive echoes without distortions of perception caused by changes in amplitude. Copyright © 2013 Elsevier B.V. All rights reserved.
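
    A small sketch of how a "best level" and a simple monotonicity index can be read off a rate-level function; the spike counts below are invented for illustration, and the 50% criterion is one common convention rather than necessarily the one used in this study.

```python
import numpy as np

# Hypothetical rate-level function for one cortical unit: spike counts
# at increasing sound pressure levels (dB SPL).
levels = np.arange(10, 100, 10)
rates = np.array([2, 8, 20, 35, 28, 15, 7, 4, 3], dtype=float)

best_level = levels[rates.argmax()]

# A common monotonicity index: response at the highest level relative to
# the maximum response; values well below 1 indicate non-monotonic tuning.
monotonicity = rates[-1] / rates.max()

print(f"best level: {best_level} dB SPL")
print(f"monotonicity index: {monotonicity:.2f} "
      f"({'non-monotonic' if monotonicity < 0.5 else 'monotonic'})")
```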

  2. Neural bases of rhythmic entrainment in humans: critical transformation between cortical and lower-level representations of auditory rhythm.

    PubMed

    Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre

    2018-02-01

    The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
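
    The frequency-tagging logic (comparing spectral amplitude at meter-related versus meter-unrelated frequencies) can be sketched as follows on a simulated EEG trace; the chosen frequencies and signal are placeholders, not the stimuli or recordings from the study.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, duration = 512.0, 32.0                 # Hz, seconds (power-of-two sample count)
t = np.arange(0, duration, 1 / fs)

# Toy EEG: a small steady-state response at a meter-related frequency in noise.
eeg = 0.5 * np.sin(2 * np.pi * 1.25 * t) + rng.normal(0, 5, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amp_at(f_target):
    """Amplitude at the FFT bin closest to f_target."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

meter_related = [1.25, 2.5]        # hypothetical meter frequencies
meter_unrelated = [1.875, 3.125]   # other frequencies present in the rhythm
print("meter-related mean amplitude:  ", round(np.mean([amp_at(f) for f in meter_related]), 3))
print("meter-unrelated mean amplitude:", round(np.mean([amp_at(f) for f in meter_unrelated]), 3))
```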

  3. Auditory dysfunction in schizophrenia: integrating clinical and basic features

    PubMed Central

    Javitt, Daniel C.; Sweet, Robert A.

    2015-01-01

    Schizophrenia is a complex neuropsychiatric disorder that is associated with persistent psychosocial disability in affected individuals. Although studies of schizophrenia have traditionally focused on deficits in higher-order processes such as working memory and executive function, there is an increasing realization that, in this disorder, deficits can be found throughout the cortex and are manifest even at the level of early sensory processing. These deficits are highly amenable to translational investigation and represent potential novel targets for clinical intervention. Deficits, moreover, have been linked to specific structural abnormalities in post-mortem auditory cortex tissue from individuals with schizophrenia, providing unique insights into underlying pathophysiological mechanisms. PMID:26289573

  4. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    PubMed

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic rules to more complex ones such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that the enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left inferior frontal gyrus and premotor cortex, and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    PubMed

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice.

  6. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    PubMed Central

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus—tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice. PMID:22754508
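
    A hedged sketch of the signal-processing step at the heart of TMNMT: removing an octave of energy centred on an assumed individual tinnitus frequency with a band-stop filter. The filter order, notch width and the 6 kHz pitch below are illustrative choices, not the published protocol.

```python
import numpy as np
from scipy import signal

fs = 44_100                              # Hz, audio sampling rate
tinnitus_freq = 6_000                    # Hz, hypothetical individual tinnitus pitch
half_octave = 0.5                        # half an octave on each side of the pitch

low = tinnitus_freq * 2 ** (-half_octave)
high = tinnitus_freq * 2 ** (half_octave)

# Band-stop ("notch") Butterworth filter removing energy around the tinnitus pitch.
sos = signal.butter(4, [low, high], btype="bandstop", fs=fs, output="sos")

rng = np.random.default_rng(10)
music = rng.normal(size=fs * 10)         # stand-in for 10 s of music samples
notched = signal.sosfiltfilt(sos, music)

print(f"notch: {low:.0f}-{high:.0f} Hz removed around {tinnitus_freq} Hz")
```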

  7. Changes of oscillatory activity in pitch processing network and related tinnitus relief induced by acoustic CR neuromodulation

    PubMed Central

    Adamchic, Ilya; Hauptmann, Christian; Tass, Peter A.

    2012-01-01

    Chronic subjective tinnitus is characterized by abnormal neuronal synchronization in the central auditory system. As shown in a controlled clinical trial, acoustic coordinated reset (CR) neuromodulation causes a significant relief of tinnitus symptoms along with a significant decrease of pathological oscillatory activity in a network comprising auditory and non-auditory brain areas, which is often accompanied with a significant tinnitus pitch change. Here we studied if the tinnitus pitch change correlates with a reduction of tinnitus loudness and/or annoyance as assessed by visual analog scale (VAS) scores. Furthermore, we studied if the changes of the pattern of brain synchrony in tinnitus patients induced by 12 weeks of CR therapy depend on whether or not the patients undergo a pronounced tinnitus pitch change. Therefore, we applied standardized low-resolution brain electromagnetic tomography (sLORETA) to EEG recordings from two groups of patients with a sustained CR-induced relief of tinnitus symptoms with and without tinnitus pitch change. We found that absolute changes of VAS loudness and VAS annoyance scores significantly correlate with the modulus, i.e., the absolute value, of the tinnitus pitch change. Moreover, as opposed to patients with small or no pitch change we found a significantly stronger decrease in gamma power in patients with pronounced tinnitus pitch change in right parietal cortex (Brodmann area, BA 40), right frontal cortex (BA 9, 46), left temporal cortex (BA 22, 42), and left frontal cortex (BA 4, 6), combined with a significantly stronger increase of alpha (10–12 Hz) activity in the right and left anterior cingulate cortex (ACC; BA 32, 24). In addition, we revealed a significantly lower functional connectivity in the gamma band between the right dorsolateral prefrontal cortex (BA 46) and the right ACC (BA 32) after 12 weeks of CR therapy in patients with pronounced pitch change. Our results indicate a substantial, CR-induced reduction of tinnitus-related auditory binding in a pitch processing network. PMID:22493570

  8. Induction of plasticity in the human motor cortex by pairing an auditory stimulus with TMS.

    PubMed

    Sowman, Paul F; Dueholm, Søren S; Rasmussen, Jesper H; Mrachacz-Kersting, Natalie

    2014-01-01

    Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is applicable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.

  9. Functional correlates of the anterolateral processing hierarchy in human auditory cortex.

    PubMed

    Chevillet, Mark; Riesenhuber, Maximilian; Rauschecker, Josef P

    2011-06-22

    Converging evidence supports the hypothesis that an anterolateral processing pathway mediates sound identification in auditory cortex, analogous to the role of the ventral cortical pathway in visual object recognition. Studies in nonhuman primates have characterized the anterolateral auditory pathway as a processing hierarchy, composed of three anatomically and physiologically distinct initial stages: core, belt, and parabelt. In humans, potential homologs of these regions have been identified anatomically, but reliable and complete functional distinctions between them have yet to be established. Because the anatomical locations of these fields vary across subjects, investigations of potential homologs between monkeys and humans require these fields to be defined in single subjects. Using functional MRI, we presented three classes of sounds (tones, band-passed noise bursts, and conspecific vocalizations), equivalent to those used in previous monkey studies. In each individual subject, three regions showing functional similarities to macaque core, belt, and parabelt were readily identified. Furthermore, the relative sizes and locations of these regions were consistent with those reported in human anatomical studies. Our results demonstrate that the functional organization of the anterolateral processing pathway in humans is largely consistent with that of nonhuman primates. Because our scanning sessions last only 15 min/subject, they can be run in conjunction with other scans. This will enable future studies to characterize functional modules in human auditory cortex at a level of detail previously possible only in visual cortex. Furthermore, the approach of using identical schemes in both humans and monkeys will aid with establishing potential homologies between them.

  10. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
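
    A toy version of the variance-decomposition idea: with a full factorial stimulus set, the share of response variance attributable to each stimulus dimension can be estimated from main-effect sums of squares. The simulated unit below is built so that pitch dominates; the numbers are arbitrary and this is not the authors' decomposition code.

```python
import numpy as np

rng = np.random.default_rng(11)
n_pitch, n_timbre, n_azi, n_rep = 4, 4, 4, 20

# Hypothetical spike counts for one unit across a full factorial stimulus set.
pitch_eff = rng.normal(0, 3, n_pitch)[:, None, None, None]
timbre_eff = rng.normal(0, 1, n_timbre)[None, :, None, None]
azi_eff = rng.normal(0, 0.5, n_azi)[None, None, :, None]
resp = 20 + pitch_eff + timbre_eff + azi_eff \
       + rng.normal(0, 2, (n_pitch, n_timbre, n_azi, n_rep))

grand = resp.mean()
total_ss = ((resp - grand) ** 2).sum()

def main_effect_ss(axis):
    """Sum of squares explained by one stimulus dimension (marginal means)."""
    keep = tuple(i for i in range(3) if i != axis) + (3,)
    marg = resp.mean(axis=keep)              # mean per level of this factor
    n_per_level = resp.size / marg.size
    return (n_per_level * (marg - grand) ** 2).sum()

for name, axis in [("pitch", 0), ("timbre", 1), ("azimuth", 2)]:
    print(f"{name}: {100 * main_effect_ss(axis) / total_ss:.1f}% of response variance")
```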

  11. Auditory-Motor Processing of Speech Sounds

    PubMed Central

    Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.

    2013-01-01

    The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846

  12. Activity in the left auditory cortex is associated with individual impulsivity in time discounting.

    PubMed

    Han, Ruokang; Takahashi, Taiki; Miyazaki, Akane; Kadoya, Tomoka; Kato, Shinya; Yokosawa, Koichi

    2015-01-01

    Impulsivity shapes individual decision-making behavior; it can therefore reflect consumption behavior and the risk of addiction, and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. Impulsivity is known to be related to individual time perception: a person who perceives a given physical duration as longer is more impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Impulsivity was evaluated with a self-report questionnaire in twelve healthy right-handed adults, and activity of the auditory cortices of both hemispheres during listening to continuous tones was recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated with impulsivity; larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it represents impulsivity indirectly.

  13. “When Music Speaks”: Auditory Cortex Morphology as a Neuroanatomical Marker of Language Aptitude and Musicality

    PubMed Central

    Turker, Sabrina; Reiterer, Susanne M.; Seither-Preisler, Annemarie; Schneider, Peter

    2017-01-01

    Recent research has shown that the morphology of certain brain regions may indeed correlate with a number of cognitive skills such as musicality or language ability. The main aim of the present study was to explore the extent to which foreign language aptitude, in particular phonetic coding ability, is influenced by the morphology of Heschl’s gyrus (HG; auditory cortex), working memory capacity, and musical ability. In this study, the auditory cortices of German-speaking individuals (N = 30; 13 males/17 females; aged 20–40 years) with high and low scores in a number of language aptitude tests were compared. The subjects’ language aptitude was measured by three different tests, namely a Hindi speech imitation task (phonetic coding ability), an English pronunciation assessment, and the Modern Language Aptitude Test (MLAT). Furthermore, working memory capacity and musical ability were assessed to reveal their relationship with foreign language aptitude. On the behavioral level, significant correlations were found between phonetic coding ability, English pronunciation skills, musical experience, and language aptitude as measured by the MLAT. Parts of all three tests measuring language aptitude correlated positively and significantly with each other, supporting their validity for measuring components of language aptitude. Remarkably, the number of instruments played by subjects showed significant correlations with all language aptitude measures and musicality, whereas the number of foreign languages did not show any correlations. With regard to the neuroanatomy of auditory cortex, adults with very high scores in the Hindi testing and the musicality test (AMMA) demonstrated a clear predominance of complete posterior HG duplications in the right hemisphere. This may reignite the discussion of the importance of the right hemisphere for language processing, especially when linked or common resources are involved, such as the inter-dependency between phonetic and musical aptitude. PMID:29250017

  14. New perspectives on the auditory cortex: learning and memory.

    PubMed

    Weinberger, Norman M

    2015-01-01

    Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. Sensitivity and selectivity to signal tones increase near threshold, above-threshold tuning shifts toward the frequency of acoustic signals, and the area representing those signals can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing a tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.

  15. Local and Global Spatial Organization of Interaural Level Difference and Frequency Preferences in Auditory Cortex

    PubMed Central

    Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M

    2018-01-01

    Abstract Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
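
    The opponent-channel read-out referred to above can be sketched in a few lines. In the toy Python example below, the sigmoidal ILD tuning, Poisson spiking, population sizes and nearest-class-mean classifier are illustrative assumptions rather than the study's actual decoder; the point is simply that the difference between contralateral-preferring and ipsilateral-preferring population responses carries enough information to classify ILD:

      import numpy as np

      rng = np.random.default_rng(1)
      ilds = np.arange(-20, 21, 10)          # ILD in dB; positive = contralateral ear louder
      n_contra, n_ipsi, n_trials = 30, 10, 50

      def responses(ild, prefer_contra):
          """Sigmoidal rate tuning: contra-preferring units fire more for positive ILDs."""
          sign = 1.0 if prefer_contra else -1.0
          rate = 10.0 / (1.0 + np.exp(-sign * ild / 5.0))
          n = n_contra if prefer_contra else n_ipsi
          return rng.poisson(rate, size=n)

      def opponent_signal(ild):
          return responses(ild, True).mean() - responses(ild, False).mean()

      # "Training": mean opponent signal per ILD; "testing": nearest-mean classification
      train_means = np.array([np.mean([opponent_signal(i) for _ in range(n_trials)])
                              for i in ilds])
      correct = 0
      for k, ild in enumerate(ilds):
          for _ in range(n_trials):
              pred = np.argmin(np.abs(opponent_signal(ild) - train_means))
              correct += pred == k
      print(f"decoder accuracy: {correct / (len(ilds) * n_trials):.2f} (chance = {1 / len(ilds):.2f})")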

  16. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    PubMed Central

    Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.

    2015-01-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from the PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918

  17. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723
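
    The signal- and noise-correlation analysis mentioned above is easy to illustrate. The short Python sketch below uses simulated trial-by-stimulus response matrices for a pair of neurons; the shared-noise model and the Pearson-correlation definitions are common conventions assumed here for illustration, not the authors' exact pipeline:

      import numpy as np

      rng = np.random.default_rng(2)
      n_stim, n_trials = 8, 40
      tuning_a = rng.uniform(2, 10, n_stim)            # mean rate of neuron A per stimulus
      tuning_b = tuning_a + rng.normal(0, 1, n_stim)   # neuron B has similar tuning

      shared = rng.normal(0, 1, (n_stim, n_trials))    # trial-to-trial noise shared by the pair
      resp_a = tuning_a[:, None] + shared + rng.normal(0, 1, (n_stim, n_trials))
      resp_b = tuning_b[:, None] + shared + rng.normal(0, 1, (n_stim, n_trials))

      # Signal correlation: correlate the stimulus-averaged tuning curves
      signal_corr = np.corrcoef(resp_a.mean(axis=1), resp_b.mean(axis=1))[0, 1]

      # Noise correlation: correlate residuals after removing each stimulus mean
      resid_a = (resp_a - resp_a.mean(axis=1, keepdims=True)).ravel()
      resid_b = (resp_b - resp_b.mean(axis=1, keepdims=True)).ravel()
      noise_corr = np.corrcoef(resid_a, resid_b)[0, 1]

      print(f"signal correlation: {signal_corr:.2f}   noise correlation: {noise_corr:.2f}")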

  19. Direct recordings from the auditory cortex in a cochlear implant user.

    PubMed

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.
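
    Event-related band power of the kind reported above is typically computed by band-pass filtering, taking the analytic (Hilbert) envelope, and normalising to a pre-stimulus baseline. The Python sketch below applies this to simulated single-trial traces; the band limits, baseline window and simulated high-gamma burst are illustrative assumptions only:

      import numpy as np
      from scipy import signal

      fs = 1000
      t = np.arange(-0.2, 0.6, 1 / fs)              # one trial; stimulus onset at t = 0
      rng = np.random.default_rng(8)

      def one_trial():
          trace = rng.normal(size=t.size)           # ongoing background activity
          burst = np.sin(2 * np.pi * 90 * t) * np.exp(-((t - 0.1) / 0.05) ** 2)
          return trace + 3 * burst                  # evoked high-gamma burst after onset

      b, a = signal.butter(4, [70, 150], btype="bandpass", fs=fs)
      trials = np.array([signal.filtfilt(b, a, one_trial()) for _ in range(50)])
      power = np.abs(signal.hilbert(trials, axis=1)) ** 2

      baseline = power[:, t < 0].mean(axis=1, keepdims=True)
      erbp_db = (10 * np.log10(power / baseline)).mean(axis=0)   # trial-averaged, in dB
      print(f"peak ERBP: {erbp_db.max():.1f} dB at t = {t[erbp_db.argmax()] * 1000:.0f} ms")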

  20. Auditory cortex stimulation to suppress tinnitus: mechanisms and strategies.

    PubMed

    Zhang, Jinsheng

    2013-01-01

    Brain stimulation is an important method used to modulate neural activity and suppress tinnitus. Several auditory and non-auditory brain regions have been targeted for stimulation. This paper reviews recent progress on auditory cortex (AC) stimulation to suppress tinnitus and its underlying neural mechanisms and stimulation strategies. At the same time, the author provides his opinions and hypotheses on both animal and human models. The author also proposes a medial geniculate body (MGB)-thalamic reticular nucleus (TRN) gating mechanism that gates tinnitus-related neural information coming from upstream and downstream projection structures. The upstream structures include the lower auditory brainstem and midbrain structures. The downstream structures include the AC and certain limbic centers. Both upstream and downstream information is involved in a dynamic gating mechanism in the MGB together with the TRN. When abnormal gating occurs at the thalamic level, the spilled-out information interacts with the AC to generate tinnitus. The tinnitus signals at the MGB-TRN gate may be modulated by different forms of stimulation, including brain stimulation. Each stimulation acts as a gain modulator to control the level of tinnitus signals at the MGB-TRN gate. This hypothesis may explain why different types of stimulation can induce tinnitus suppression. Depending on the tinnitus etiology, MGB-TRN gating may differ in level and dynamics, causing variability in the tinnitus suppression induced by different gain controllers. This may explain why the suppression of tinnitus induced by one type of stimulation varies across individual patients. Copyright © 2012. Published by Elsevier B.V.

  1. The auditory representation of speech sounds in human motor cortex

    PubMed Central

    Cheung, Connie; Hamilton, Liberty S; Johnson, Keith; Chang, Edward F

    2016-01-01

    In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different than during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex does not contain articulatory representations of perceived actions in speech, but rather, represents auditory vocal information. DOI: http://dx.doi.org/10.7554/eLife.12577.001 PMID:26943778

  2. Stereotactically-guided Ablation of the Rat Auditory Cortex, and Localization of the Lesion in the Brain.

    PubMed

    Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A

    2017-10-11

    The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extent of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.

  3. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia.

    PubMed

    Ewing, Samuel G; Grace, Anthony A

    2013-02-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia

    PubMed Central

    Ewing, Samuel G.; Grace, Anthony A.

    2012-01-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia, but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. PMID:23269227

  5. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    PubMed

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  6. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation

    PubMed Central

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J.; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M.; Lenarz, Thomas; Lim, Hubert H.

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763

  7. A train of electrical pulses applied to the primary auditory cortex evokes a conditioned response in guinea pigs.

    PubMed

    Okuda, Yuji; Shikata, Hiroshi; Song, Wen-Jie

    2011-09-01

    As a step to develop auditory prosthesis by cortical stimulation, we tested whether a single train of pulses applied to the primary auditory cortex could elicit classically conditioned behavior in guinea pigs. Animals were trained using a tone as the conditioned stimulus and an electrical shock to the right eyelid as the unconditioned stimulus. After conditioning, a train of 11 pulses applied to the left AI induced the conditioned eye-blink response. Cortical stimulation induced no response after extinction. Our results support the feasibility of auditory prosthesis by electrical stimulation of the cortex. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  8. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    PubMed

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations.

    PubMed

    Stropahl, Maren; Chen, Ling-Chia; Debener, Stefan

    2017-01-01

    With the advances of cochlear implant (CI) technology, many deaf individuals can partially regain their hearing ability. However, there is a large variation in the level of recovery. Cortical changes induced by hearing deprivation and restoration with CIs have been thought to contribute to this variation. The current review aims to identify these cortical changes in postlingually deaf CI users and discusses their maladaptive or adaptive relationship to the CI outcome. Overall, intra-modal and cross-modal reorganization patterns have been identified in postlingually deaf CI users in visual and in auditory cortex. Even though cross-modal activation in auditory cortex is considered as maladaptive for speech recovery in CI users, a similar activation relates positively to lip reading skills. Furthermore, cross-modal activation of the visual cortex seems to be adaptive for speech recognition. Currently available evidence points to an involvement of further brain areas and suggests that a focus on the reversal of visual take-over of the auditory cortex may be too limited. Future investigations should consider expanded cortical as well as multi-sensory processing and capture different hierarchical processing steps. Furthermore, prospective longitudinal designs are needed to track the dynamics of cortical plasticity that takes place before and after implantation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Stimulus-specific suppression preserves information in auditory short-term memory.

    PubMed

    Linke, Annika C; Vicente-Grabovetsky, Alejandro; Cusack, Rhodri

    2011-08-02

    Philosophers and scientists have puzzled for millennia over how perceptual information is stored in short-term memory. Some have suggested that early sensory representations are involved, but their precise role has remained unclear. Using high-resolution functional MRI (fMRI), the current study asks whether auditory cortex shows sustained frequency-specific activation while sounds are maintained in short-term memory. Investigating short-term memory representations within regions of human auditory cortex with fMRI has been difficult because of their small size and high anatomical variability between subjects. However, we overcame these constraints by using multivoxel pattern analysis. It clearly revealed frequency-specific activity during the encoding phase of a change detection task, and the degree of this frequency-specific activation was positively related to performance in the task. Although the sounds had to be maintained in memory, activity in auditory cortex was significantly suppressed. Strikingly, patterns of activity in this maintenance period correlated negatively with the patterns evoked by the same frequencies during encoding. Furthermore, individuals who used a rehearsal strategy to remember the sounds showed reduced frequency-specific suppression during the maintenance period. Although negative activations are often disregarded in fMRI research, our findings imply that decreases in blood oxygenation level-dependent response carry important stimulus-specific information and can be related to cognitive processes. We hypothesize that, during auditory change detection, frequency-specific suppression protects short-term memory representations from being overwritten by inhibiting the encoding of interfering sounds.
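
    The core pattern-analysis logic described above can be captured in a few lines: voxel patterns evoked by each frequency during encoding are correlated with the patterns measured while the same frequency is held in memory. In the Python sketch below, the simulated patterns (with a built-in sign flip during maintenance) are an illustrative assumption, not the study's fMRI data or preprocessing:

      import numpy as np

      rng = np.random.default_rng(3)
      n_freqs, n_voxels = 4, 200
      encoding = rng.normal(0, 1, (n_freqs, n_voxels))        # frequency-specific patterns
      # Maintenance patterns: a suppressed (sign-flipped) copy of the encoding patterns
      maintenance = -0.5 * encoding + rng.normal(0, 1, (n_freqs, n_voxels))

      within = [np.corrcoef(encoding[f], maintenance[f])[0, 1] for f in range(n_freqs)]
      between = [np.corrcoef(encoding[f], maintenance[g])[0, 1]
                 for f in range(n_freqs) for g in range(n_freqs) if f != g]

      print(f"mean within-frequency r:  {np.mean(within):+.2f}")   # negative => suppression
      print(f"mean between-frequency r: {np.mean(between):+.2f}")  # near zero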

  11. The cortical language circuit: from auditory perception to sentence comprehension.

    PubMed

    Friederici, Angela D

    2012-05-01

    Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Functionally Specific Oscillatory Activity Correlates between Visual and Auditory Cortex in the Blind

    ERIC Educational Resources Information Center

    Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.

    2012-01-01

    Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…

  13. Stimulus Expectancy Modulates Inferior Frontal Gyrus and Premotor Cortex Activity in Auditory Perception

    ERIC Educational Resources Information Center

    Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten

    2012-01-01

    In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as…

  14. The harmonic organization of auditory cortex

    PubMed Central

    Wang, Xiaoqin

    2013-01-01

    A fundamental structural feature of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and other animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread presence of harmonicity in many aspects of the acoustic environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544

  15. Reduced event-related current density in the anterior cingulate cortex in schizophrenia.

    PubMed

    Mulert, C; Gallinat, J; Pascual-Marqui, R; Dorn, H; Frick, K; Schlattmann, P; Mientus, S; Herrmann, W M; Winterer, G

    2001-04-01

    There is good evidence from neuroanatomic postmortem and functional imaging studies that dysfunction of the anterior cingulate cortex plays a prominent role in the pathophysiology of schizophrenia. So far, no electrophysiological localization study has been performed to investigate this deficit. We investigated 18 drug-free schizophrenic patients and 25 normal subjects with an auditory choice reaction task and measured event-related activity with 19 electrodes. Estimation of the current source density distribution in Talairach space was performed with low-resolution electromagnetic tomography (LORETA). In normals, we could differentiate between an early event-related potential peak of the N1 (90-100 ms) and a later N1 peak (120-130 ms). Subsequent current-density LORETA analysis in Talairach space showed increased activity in the auditory cortex area during the first N1 peak and increased activity in the anterior cingulate gyrus during the second N1 peak. No activation difference was observed in the auditory cortex between normals and patients with schizophrenia. However, schizophrenics showed significantly less anterior cingulate gyrus activation and slowed reaction times. Our results confirm previous findings of an electrical source in the anterior cingulate and an anterior cingulate dysfunction in schizophrenics. Our data also suggest that anterior cingulate function in schizophrenics is disturbed at a relatively early time point in the information-processing stream (100-140 ms poststimulus). Copyright 2001 Academic Press.

  16. Analyzing pitch chroma and pitch height in the human brain.

    PubMed

    Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D

    2003-11-01

    The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.

  17. Network and external perturbation induce burst synchronisation in cat cerebral cortex

    NASA Astrophysics Data System (ADS)

    Lameu, Ewandson L.; Borges, Fernando S.; Borges, Rafael R.; Batista, Antonio M.; Baptista, Murilo S.; Viana, Ricardo L.

    2016-05-01

    The mammalian brain is divided into different cortical areas that are anatomically connected, forming larger networks that perform cognitive tasks. The cat cerebral cortex is composed of 65 areas organised into the visual, auditory, somatosensory-motor and frontolimbic cognitive regions. We built a network of networks, in which sub-networks are connected according to the connections observed among the cat cortical areas, with the aim of studying how inputs drive synchronous behaviour in this cat brain-like network. We show that, without external perturbations, a high level of bursting synchronisation can be observed between neurons within almost all areas, except for the auditory area. Bursting synchronisation appears between neurons in the auditory region when an external perturbation is applied in another cognitive area. This is clear evidence that burst synchronisation and collective behaviour in the brain might be a process mediated by other brain areas under stimulation.
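
    Burst synchronisation of the kind quantified above is often summarised with a Kuramoto-style order parameter defined on burst phases. The Python sketch below, which uses simulated burst-onset times jittered around a shared rhythm (an illustrative assumption, not the network model of the paper), shows one common way to compute such a measure:

      import numpy as np

      rng = np.random.default_rng(4)
      n_neurons, n_bursts = 50, 20
      base = np.cumsum(rng.uniform(80, 120, n_bursts))             # shared burst rhythm (ms)
      bursts = [np.sort(base + rng.normal(0, 5.0, n_bursts)) for _ in range(n_neurons)]

      def burst_phase(times, t):
          """Phase grows by 2*pi between consecutive burst onsets (linear interpolation)."""
          k = np.searchsorted(times, t) - 1
          if k < 0 or k >= len(times) - 1:
              return None
          return 2 * np.pi * (k + (t - times[k]) / (times[k + 1] - times[k]))

      t_grid = np.linspace(base[1], base[-2], 200)
      order = []
      for t in t_grid:
          phases = [p for times in bursts if (p := burst_phase(times, t)) is not None]
          order.append(np.abs(np.mean(np.exp(1j * np.array(phases)))))
      print(f"mean burst-synchronisation order parameter R = {np.mean(order):.2f}")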

  18. Auditory and visual cortex of primates: a comparison of two sensory systems

    PubMed Central

    Rauschecker, Josef P.

    2014-01-01

    A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177

  19. Evidence for pitch chroma mapping in human auditory cortex.

    PubMed

    Briley, Paul M; Breakey, Charlotte; Krumbholz, Katrin

    2013-11-01

    Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, "pitch height," and a cyclical, "pitch chroma," dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex.
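
    The two pitch dimensions at issue here are easy to make concrete. In the tiny Python example below (the 440 Hz reference is an arbitrary illustrative choice), pitch height grows monotonically with repetition rate, whereas pitch chroma wraps around once per octave, so tones an octave apart share the same chroma:

      import numpy as np

      f_ref = 440.0                                     # arbitrary reference (A4)

      def height(f):
          return np.log2(f)                             # monotonic dimension, in octaves

      def chroma(f):
          return np.log2(f / f_ref) % 1.0               # cyclical dimension, fraction of an octave

      for f in [220.0, 440.0, 880.0, 660.0]:
          print(f"{f:6.1f} Hz   height = {height(f):.2f}   chroma = {chroma(f):.2f}")
      # 220, 440 and 880 Hz differ in height but share chroma 0.00; 660 Hz sits at ~0.58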

  20. Evidence for Pitch Chroma Mapping in Human Auditory Cortex

    PubMed Central

    Briley, Paul M.; Breakey, Charlotte; Krumbholz, Katrin

    2013-01-01

    Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, “pitch height,” and a cyclical, “pitch chroma,” dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex. PMID:22918980

  1. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  2. Emergence of neural encoding of auditory objects while listening to competing speakers

    PubMed Central

    Ding, Nai; Simon, Jonathan Z.

    2012-01-01

    A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470
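
    The envelope reconstruction described above is usually implemented as a regularised linear decoder applied to time-lagged neural responses. The Python sketch below simulates the idea end-to-end; the synthetic 'sensor' channels, lag range and ridge penalty are illustrative assumptions, not the MEG analysis used in the study:

      import numpy as np

      rng = np.random.default_rng(7)
      fs, dur = 100, 60                            # 100 Hz envelope sampling, 60 s of data
      n_samp, n_ch, max_lag = fs * dur, 20, 25     # decoder uses lags of 0-240 ms

      envelope = np.convolve(rng.exponential(1.0, n_samp), np.hanning(20), mode="same")
      true_lags = rng.integers(5, 20, n_ch)
      neural = np.stack([np.roll(envelope, lag) for lag in true_lags], axis=1)
      neural += 2.0 * rng.normal(size=neural.shape)            # sensor noise

      # Design matrix of time-lagged neural responses; ridge fit on the first half
      X = np.hstack([np.roll(neural, -lag, axis=0) for lag in range(max_lag)])
      half = n_samp // 2
      w = np.linalg.solve(X[:half].T @ X[:half] + 1e3 * np.eye(X.shape[1]),
                          X[:half].T @ envelope[:half])

      recon = X[half:] @ w                                     # held-out reconstruction
      r = np.corrcoef(recon, envelope[half:])[0, 1]
      print(f"held-out reconstruction accuracy r = {r:.2f}")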

  3. Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons

    PubMed Central

    Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.

    2015-01-01

    The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746

  4. Directional connectivity of resting state human fMRI data using cascaded ICA-PDC analysis.

    PubMed

    Silfverhuth, Minna J; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Veijola, Juha; Tervonen, Osmo; Kiviniemi, Vesa

    2011-11-01

    Directional connectivity measures, such as partial directed coherence (PDC), provide a means to explore effective connectivity in the human brain. Independent component analysis (ICA) was used to reduce the original data set for subsequent PDC analysis. The aim was to test this cascaded ICA-PDC approach in causality studies of human functional magnetic resonance imaging (fMRI) data. Resting-state group data were acquired from 55 subjects using a 1.5 T scanner (TR 1800 ms, 250 volumes). Probabilistic group ICA with temporal concatenation and additional repeatability runs (n = 200) were performed. The reduced data set comprised the time series of the following nine ICA components: secondary somatosensory cortex, inferior temporal gyrus, intracalcarine cortex, primary auditory cortex, amygdala, putamen, and the frontal medial cortex, posterior cingulate cortex, and precuneus, the latter three constituting the default mode network components. Re-normalized PDC (rPDC) values were computed to determine directional connectivity at the group level at each frequency. An integrative role was suggested for the precuneus, whereas the primary auditory cortex and amygdala appeared to act as major divergence regions. This study demonstrates the potential of the cascaded ICA-PDC approach for directional connectivity studies of human fMRI.
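
    The PDC step of such a pipeline can be sketched compactly: fit a vector-autoregressive (VAR) model to the component time courses and normalise the frequency-domain coefficient matrix column-wise. In the Python example below, the two simulated channels, the model order and the plain least-squares fit are illustrative assumptions (the study itself used renormalised PDC on nine ICA component time series):

      import numpy as np

      rng = np.random.default_rng(5)
      n, p = 2000, 2                        # samples and VAR model order
      x = np.zeros((n, 2))
      for t in range(p, n):                 # channel 1 drives channel 0, not vice versa
          x[t, 0] = 0.5 * x[t - 1, 0] + 0.4 * x[t - 1, 1] + rng.normal()
          x[t, 1] = 0.7 * x[t - 1, 1] - 0.3 * x[t - 2, 1] + rng.normal()

      # Least-squares VAR fit: x[t] = A1 x[t-1] + A2 x[t-2] + noise
      Y = x[p:]
      Z = np.hstack([x[p - k:n - k] for k in range(1, p + 1)])   # lagged regressors
      coefs, *_ = np.linalg.lstsq(Z, Y, rcond=None)
      A = [coefs[2 * (k - 1):2 * k].T for k in range(1, p + 1)]  # lag-k coefficient matrices

      def pdc(freq):
          """PDC[i, j](f): normalised influence of channel j on channel i at frequency f."""
          Af = np.eye(2, dtype=complex)
          for k, Ak in enumerate(A, start=1):
              Af -= Ak * np.exp(-2j * np.pi * freq * k)
          return np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0, keepdims=True))

      print(np.round(pdc(0.1), 2))          # expect PDC[0, 1] >> PDC[1, 0]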

  5. The added value of auditory cortex transcranial random noise stimulation (tRNS) after bifrontal transcranial direct current stimulation (tDCS) for tinnitus.

    PubMed

    To, Wing Ting; Ost, Jan; Hart, John; De Ridder, Dirk; Vanneste, Sven

    2017-01-01

    Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Research has suggested that functional abnormalities in tinnitus patients involve auditory as well as non-auditory brain areas. Transcranial electrical stimulation (tES), such as transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex and transcranial random noise stimulation (tRNS) over the auditory cortex, has been shown to modulate brain activity and transiently suppress tinnitus symptoms. Targeting two core regions of the tinnitus network with tES might therefore be a promising strategy to enhance treatment effects. This proof-of-concept study aimed to investigate the effect of a multisite tES treatment protocol on tinnitus intensity and distress. A total of 40 tinnitus patients were enrolled in this study and received either bifrontal tDCS alone or the multisite treatment of bifrontal tDCS followed by bilateral auditory cortex tRNS. Both groups were treated in eight sessions (twice a week for 4 weeks). Our results show that the multisite treatment protocol produced more pronounced effects than the bifrontal tDCS protocol or the waiting-list group, suggesting an added value of auditory cortex tRNS over bifrontal tDCS alone for tinnitus patients. These findings support the involvement of auditory as well as non-auditory brain areas in the pathophysiology of tinnitus and support the idea that network stimulation can be effective in the treatment of neurological disorders. This multisite tES treatment protocol proved to be safe and feasible for clinical routine in tinnitus patients.

  6. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction.

    PubMed

    Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker

    2016-01-01

    Whether cognitive load (and other aspects of task difficulty) increases or decreases distractibility is a subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, and hence distractibility, as a side effect of the increased activity in a focused-attention network.

  7. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction

    PubMed Central

    Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker

    2016-01-01

    Whether cognitive load (and other aspects of task difficulty) increases or decreases distractibility is a subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, and hence distractibility, as a side effect of the increased activity in a focused-attention network. PMID:27242485

  8. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    PubMed Central

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
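
    The noise-filtering idea described above (favoring long, slowly modulated sounds with sharp spectral structure) can be illustrated with a minimal modulation-domain filter. The sketch below is an assumption-laden illustration of that general principle, not the authors' algorithm: the STFT parameters, modulation cutoff, and sharpening gain are invented for demonstration.

    ```python
    import numpy as np
    from scipy.signal import stft, istft
    from scipy.ndimage import uniform_filter1d

    def denoise_modulation(x, fs, max_temp_mod_hz=30.0, spectral_sharpen=1.0):
        """Keep slow temporal modulations and emphasize sharp spectral structure
        in the log-magnitude spectrogram, then resynthesize with the original
        phase. All parameter values are illustrative."""
        f, t, Z = stft(x, fs=fs, nperseg=512, noverlap=384)
        mag, phase = np.abs(Z), np.angle(Z)
        log_mag = np.log1p(mag)

        # 1) low-pass the temporal envelope of every frequency channel
        frame_rate = 1.0 / (t[1] - t[0])
        win = max(1, int(round(frame_rate / max_temp_mod_hz)))
        slow = uniform_filter1d(log_mag, size=win, axis=1)

        # 2) emphasize sharp spectral structure (unsharp mask across frequency)
        smooth_f = uniform_filter1d(slow, size=9, axis=0)
        sharpened = slow + spectral_sharpen * (slow - smooth_f)

        new_mag = np.expm1(np.clip(sharpened, 0.0, None))
        _, x_hat = istft(new_mag * np.exp(1j * phase), fs=fs, nperseg=512, noverlap=384)
        return x_hat
    ```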

  9. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    PubMed

    Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  10. Neural Correlates of the Lombard Effect in Primate Auditory Cortex

    PubMed Central

    Eliades, Steven J.

    2012-01-01

    Speaking is a sensory-motor process that involves constant self-monitoring to ensure accurate vocal production. Self-monitoring of vocal feedback allows rapid adjustment to correct perceived differences between intended and produced vocalizations. One important behavior in vocal feedback control is a compensatory increase in vocal intensity in response to noise masking during vocal production, commonly referred to as the Lombard effect. This behavior requires mechanisms for continuously monitoring auditory feedback during speaking. However, the underlying neural mechanisms are poorly understood. Here we show that when marmoset monkeys vocalize in the presence of masking noise that disrupts vocal feedback, the compensatory increase in vocal intensity is accompanied by a shift in auditory cortex activity toward neural response patterns seen during vocalizations under normal feedback condition. Furthermore, we show that neural activity in auditory cortex during a vocalization phrase predicts vocal intensity compensation in subsequent phrases. These observations demonstrate that the auditory cortex participates in self-monitoring during the Lombard effect, and may play a role in the compensation of noise masking during feedback-mediated vocal control. PMID:22855821

  11. Harmonic template neurons in primate auditory cortex underlying complex sound processing

    PubMed Central

    Feng, Lei

    2017-01-01

    Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341
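
    As an illustration of the stimulus classes such neurons are probed with, the sketch below generates a harmonic complex tone and a frequency-jittered inharmonic control. The fundamental frequency, number of partials, and jitter range are arbitrary choices for demonstration, not the stimulus parameters used in the study.

    ```python
    import numpy as np

    def complex_tone(f0=440.0, n_harmonics=10, jitter=0.0, dur_s=0.5, fs=44100, seed=0):
        """Sum of equal-amplitude partials. jitter=0 gives a harmonic complex;
        jitter>0 perturbs each partial frequency by up to that fraction of f0,
        producing an inharmonic control. All values are illustrative."""
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur_s)) / fs
        x = np.zeros_like(t)
        for k in range(1, n_harmonics + 1):
            f_k = k * f0 + jitter * f0 * rng.uniform(-1, 1)
            x += np.sin(2 * np.pi * f_k * t)
        return x / np.max(np.abs(x))

    harmonic = complex_tone(jitter=0.0)
    inharmonic = complex_tone(jitter=0.5)   # partials jittered by up to +/- 0.5*f0
    ```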

  12. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  13. Comparison between the analysis of the loudness dependency of the auditory N1/P2 component with LORETA and dipole source analysis in the prediction of treatment response to the selective serotonin reuptake inhibitor citalopram in major depression.

    PubMed

    Mulert, C; Juckel, G; Augustin, H; Hegerl, U

    2002-10-01

    The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation, respectively) can be separated at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least as well as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The improvement on the Hamilton Scale correlated significantly with the LORETA LDAEP values (r=0.56; P=0.031) but not with the dipole source analysis LDAEP values (r=0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool for the analysis of the central serotonergic system.
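
    The reported statistics are Pearson correlations between pretreatment LDAEP values and subsequent improvement on the Hamilton scale. A minimal sketch of that type of analysis on invented data (the numbers below are placeholders, not the patients' values) might look like this:

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical pretreatment LORETA-LDAEP values and Hamilton-score
    # improvements for n = 15 patients (invented numbers for illustration).
    rng = np.random.default_rng(0)
    ldaep = rng.normal(0.25, 0.08, size=15)                  # assumed units
    improvement = 20 * ldaep + rng.normal(0, 1.5, size=15)   # % Hamilton reduction

    r, p = pearsonr(ldaep, improvement)
    print(f"LORETA-LDAEP vs. Hamilton improvement: r = {r:.2f}, P = {p:.3f}")
    ```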

  14. Changes of the directional brain networks related with brain plasticity in patients with long-term unilateral sensorineural hearing loss.

    PubMed

    Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J

    2016-01-28

    Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization of the auditory-deprived cortex, and that this cross-modal reorganization limits the clinical benefit from cochlear prosthetics. However, results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL) are inconsistent, and it is unclear whether acquired monaural deafness leads to cross-modal plasticity of the auditory cortex similar to that seen in early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four individuals with long-term USNHL and seventeen normally hearing individuals participated; all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks showed enhanced synchronous output entropy connectivity with the left primary auditory cortex in individuals with long-term left USNHL compared with normally hearing individuals. In particular, the left USNHL group showed more pronounced changes in entropy connectivity than the right USNHL group; no significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization alters the directional brain functional networks. Auditory deprivation of the left or right side thus exerts different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
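
    The abstract does not spell out the entropy-connectivity estimator, so the following is only a generic illustration of how a directional, entropy-based coupling measure (here, a crude binned transfer entropy) can be computed between two regional time series; the bin count and the absence of bias correction are simplifying assumptions.

    ```python
    import numpy as np

    def transfer_entropy(x, y, n_bins=4):
        """Binned transfer entropy TE(x -> y), in bits, for 1-D time series.
        Crude estimator for illustration only (no bias correction)."""
        def digitize(v):
            edges = np.quantile(v, np.linspace(0, 1, n_bins + 1)[1:-1])
            return np.digitize(v, edges)
        xb, yb = digitize(x), digitize(y)

        y_next, y_past, x_past = yb[1:], yb[:-1], xb[:-1]
        triples = np.stack([y_next, y_past, x_past], axis=1)

        # Joint and marginal probability tables over (y_next, y_past, x_past)
        p_xyz, _ = np.histogramdd(triples, bins=n_bins,
                                  range=[(-0.5, n_bins - 0.5)] * 3)
        p_xyz /= p_xyz.sum()
        p_yz = p_xyz.sum(axis=2)          # p(y_next, y_past)
        p_zx = p_xyz.sum(axis=0)          # p(y_past, x_past)
        p_z = p_xyz.sum(axis=(0, 2))      # p(y_past)

        te = 0.0
        for i in range(n_bins):
            for j in range(n_bins):
                for k in range(n_bins):
                    p = p_xyz[i, j, k]
                    if p > 0:
                        te += p * np.log2(p * p_z[j] / (p_yz[i, j] * p_zx[j, k]))
        return te

    # Synthetic illustration: x drives y with a one-sample lag
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(2000)
    print(transfer_entropy(x, y), transfer_entropy(y, x))   # first value should be larger
    ```

    Comparing transfer_entropy(x, y) with transfer_entropy(y, x) then gives the direction of the stronger information flow between the two regions.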

  15. Sound envelope processing in the developing human brain: A MEG study.

    PubMed

    Tang, Huizhen; Brock, Jon; Johnson, Blake W

    2016-02-01

    This study investigated auditory cortical processing of linguistically-relevant temporal modulations in the developing brains of young children. Auditory envelope following responses to white noise amplitude modulated at rates of 1-80 Hz in healthy children (aged 3-5 years) and adults were recorded using a paediatric magnetoencephalography (MEG) system and a conventional MEG system, respectively. For children, there were envelope following responses to slow modulations but no significant responses to rates higher than about 25 Hz, whereas adults showed significant envelope following responses to almost the entire range of stimulus rates. Our results show that the auditory cortex of preschool-aged children has a sharply limited capacity to process rapid amplitude modulations in sounds, as compared to the auditory cortex of adults. These neurophysiological results are consistent with previous psychophysical evidence for a protracted maturational time course for auditory temporal processing. The findings are also in good agreement with current linguistic theories that posit a perceptual bias for low frequency temporal information in speech during language acquisition. These insights also have clinical relevance for our understanding of language disorders that are associated with difficulties in processing temporal information in speech. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
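
    A sketch of the kind of amplitude-modulated noise stimulus used to elicit envelope following responses is given below; the sampling rate, duration, and modulation depth are illustrative values, not those of the study.

    ```python
    import numpy as np

    def am_noise(fs=44100, dur_s=2.0, rate_hz=25.0, depth=1.0, seed=0):
        """White noise with sinusoidal amplitude modulation at `rate_hz`.
        Duration, depth, and sampling rate are illustrative choices."""
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur_s)) / fs
        carrier = rng.standard_normal(t.size)
        envelope = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)   # 0..2 for depth=1
        x = carrier * envelope
        return x / np.max(np.abs(x))                               # normalise to +/-1

    # e.g. one stimulus per modulation rate spanning the 1-80 Hz range in the study
    stimuli = {rate: am_noise(rate_hz=rate) for rate in (1, 2, 5, 10, 25, 50, 80)}
    ```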

  16. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that, in bimodal environments, vision often has an advantage over the other senses in humans. Accordingly, memory performance is assumed to be better for visual than for, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation are largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting: visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the competing modality reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. The topography of frequency and time representation in primate auditory cortices

    PubMed Central

    Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D

    2015-01-01

    Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651

  18. Thalamic input to auditory cortex is locally heterogeneous but globally tonotopic

    PubMed Central

    Vasquez-Lopez, Sebastian A; Weissenberger, Yves; Lohse, Michael; Keating, Peter; King, Andrew J

    2017-01-01

    Topographic representation of the receptor surface is a fundamental feature of sensory cortical organization. This is imparted by the thalamus, which relays information from the periphery to the cortex. To better understand the rules governing thalamocortical connectivity and the origin of cortical maps, we used in vivo two-photon calcium imaging to characterize the properties of thalamic axons innervating different layers of mouse auditory cortex. Although tonotopically organized at a global level, we found that the frequency selectivity of individual thalamocortical axons is surprisingly heterogeneous, even in layers 3b/4 of the primary cortical areas, where the thalamic input is dominated by the lemniscal projection. We also show that thalamocortical input to layer 1 includes collaterals from axons innervating layers 3b/4 and is largely in register with the main input targeting those layers. Such locally varied thalamocortical projections may be useful in enabling rapid contextual modulation of cortical frequency representations. PMID:28891466

  19. Speech target modulates speaking induced suppression in auditory cortex

    PubMed Central

    Ventura, Maria I; Nagarajan, Srikantan S; Houde, John F

    2009-01-01

    Background: Previous magnetoencephalography (MEG) studies have demonstrated speaking-induced suppression (SIS) in the auditory cortex during vocalization tasks, wherein the M100 response to a subject's own speaking is reduced compared to the response when they hear playback of their speech. Results: The present MEG study investigated the effects of utterance rapidity and complexity on SIS. The greatest difference between speak and listen M100 amplitudes (i.e., most SIS) was found in the simple speech task; as the utterances became more rapid and complex, SIS was significantly reduced (p = 0.0003). Conclusion: These findings are highly consistent with our model of how auditory feedback is processed during speaking, where incoming feedback is compared with an efference-copy derived prediction of expected feedback. Thus, the results provide further insights about how speech motor output is controlled, as well as the computational role of auditory cortex in transforming auditory feedback. PMID:19523234

  20. Feasibility of and Design Parameters for a Computer-Based Attitudinal Research Information System

    DTIC Science & Technology

    1975-08-01

    Subject descriptors include: Audiology; Audiometers; Audiometry; Audiotapes; Audiovisual Communications Media; Audiovisual Instruction; Auditory Cortex; Auditory Displays; Auditory Evoked Potentials; Auditory Feedback; Auditory Hallucinations; Auditory Localization; Auditory Masking; Auditory Neurons.

  1. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    PubMed

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.
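
    Activity time-locked to each competing amplitude modulation can be quantified, for example, as the EEG spectral amplitude at the two modulation frequencies. The sketch below shows one simple FFT-based estimate; the sampling rate, epoch length, and modulation frequencies are hypothetical and are not taken from the study.

    ```python
    import numpy as np

    def am_following_amplitude(eeg, fs, am_freqs):
        """Spectral amplitude of a single-channel EEG epoch at each candidate
        amplitude-modulation frequency (simple windowed-FFT estimate)."""
        n = eeg.size
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in am_freqs}

    # e.g. a 30-s epoch sampled at 500 Hz, competing AM rates of 4 and 7 Hz (assumed)
    fs = 500
    epoch = np.random.default_rng(1).standard_normal(30 * fs)   # placeholder data
    print(am_following_amplitude(epoch, fs, am_freqs=(4.0, 7.0)))
    ```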

  2. Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex

    PubMed Central

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525

  3. Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study.

    PubMed

    Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina

    2013-01-01

    To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that, following the cue indicating the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.

  4. Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study

    PubMed Central

    Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina

    2013-01-01

    To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that, following the cue indicating the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category during the anticipation period. PMID:24278270

  5. Adaptation to Vocal Expressions Reveals Multistep Perception of Auditory Emotion

    PubMed Central

    Bestelmeyer, Patricia E. G.; Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal

    2014-01-01

    The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect. PMID:24920615

  6. Adaptation to vocal expressions reveals multistep perception of auditory emotion.

    PubMed

    Bestelmeyer, Patricia E G; Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal

    2014-06-11

    The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect. Copyright © 2014 Bestelmeyer et al.

  7. Auditory and visual connectivity gradients in frontoparietal cortex

    PubMed Central

    Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as by the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and it was speculated that such an arrangement allows for top-down modulation of modality-specific information within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to frontal and parietal regions rather than requiring long-range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304

  8. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
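
    A minimal sketch of fitting a generalized additive model with an interaction term to spike counts as a function of two stimulus dimensions is shown below, using the pygam library on simulated data. The choice of dimensions, the simulated response, and the library itself are illustrative assumptions rather than the authors' analysis pipeline.

    ```python
    import numpy as np
    from pygam import PoissonGAM, s, te   # pip install pygam

    rng = np.random.default_rng(0)
    n = 500
    # Two hypothetical stimulus dimensions, e.g. carrier frequency (kHz) and AM rate (Hz)
    X = np.column_stack([rng.uniform(4, 40, n), rng.uniform(2, 100, n)])

    # Simulated spike counts that include a multiplicative interaction
    lam = np.exp(0.5 + 0.03 * X[:, 0] - 0.01 * X[:, 1] + 0.0005 * X[:, 0] * X[:, 1])
    y = rng.poisson(lam)

    # Smooth term per dimension plus a tensor-product interaction term
    gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
    gam.summary()   # per-term statistics indicate whether the interaction matters
    ```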

  9. Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.

    PubMed

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    2013-01-01

    Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only in a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. Yet the conditioned group showed a reduced spread of activation to each tone in noise, but not in silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning degraded the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely to be perceived as the CS in the specific context in which the CS was associated with the US. Together, these results demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.

  10. Deviance detection based on regularity encoding along the auditory hierarchy: electrophysiological evidence in humans.

    PubMed

    Escera, Carles; Leung, Sumie; Grimm, Sabine

    2014-07-01

    Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. Owing to its long latency and cerebral generators, both regularity encoding and deviance detection have been assumed to be cortical processes. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding (rather than on refractoriness) occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of the human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location), fail to elicit MLR correlates but elicit sizable MMNs. Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection are organized in ascending levels of complexity along the auditory pathway, expanding from the brainstem up to higher-order areas of the cerebral cortex.

  11. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    PubMed Central

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

    The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  12. Auditory stream segregation in monkey auditory cortex: effects of frequency separation, presentation rate, and tone duration

    NASA Astrophysics Data System (ADS)

    Fishman, Yonatan I.; Arezzo, Joseph C.; Steinschneider, Mitchell

    2004-09-01

    Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...", similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (ΔF), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount equal to ΔF. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing ΔF, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.
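
    The alternating tone sequences described above can be generated as follows; the best frequency, ΔF (in semitones), presentation rate, and tone duration are placeholder values chosen for illustration.

    ```python
    import numpy as np

    def abab_sequence(bf_hz=8000.0, delta_f_semitones=6.0, pr_hz=10.0,
                      tone_dur_s=0.05, n_tones=40, fs=44100):
        """Alternating A-B-A-B tone sequence of the kind used in streaming
        experiments. A = best frequency, B = A shifted by delta_f (semitones);
        pr_hz is the tone presentation rate. All values are illustrative."""
        freq_a = bf_hz
        freq_b = bf_hz * 2 ** (delta_f_semitones / 12.0)
        onset_step = int(fs / pr_hz)
        n_tone = int(fs * tone_dur_s)
        assert n_tone <= onset_step, "tone duration must fit within one onset interval"
        t = np.arange(n_tone) / fs
        ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)   # 5-ms linear on/off ramps

        seq = np.zeros(onset_step * n_tones)
        for i in range(n_tones):
            f = freq_a if i % 2 == 0 else freq_b
            tone = np.sin(2 * np.pi * f * t) * ramp
            seq[i * onset_step : i * onset_step + n_tone] += tone
        return seq
    ```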

  13. Neural effects of cognitive control load on auditory selective attention

    PubMed Central

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali

    2014-01-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314

  14. Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.

    PubMed

    Joanisse, Marc F; DeSouza, Diedre D

    2014-01-01

    Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding of the direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. The differential findings for modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.
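
    A cross-validated pattern classifier is one common way to implement the multivoxel pattern analysis described above. The sketch below uses a linear SVM from scikit-learn on placeholder voxel patterns; the trial counts, voxel counts, and classifier choice are assumptions for illustration, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200                    # illustrative sizes
    X = rng.standard_normal((n_trials, n_voxels))   # placeholder voxel patterns per trial
    y = np.repeat([0, 1], n_trials // 2)            # 0 = rising FM, 1 = falling FM

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=8)       # 8-fold cross-validated accuracy
    print(f"Decoding accuracy for FM direction: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```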

  15. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    PubMed

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process, wherein top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed, involving coordination between local, feedforward, and feedback components, that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch-relevant responses concurrently may provide a new window onto the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch-relevant information.

  16. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2015-01-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process, wherein top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed, involving coordination between local, feedforward, and feedback components, that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch-relevant responses concurrently may provide a new window onto the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch-relevant information. PMID:25838636

  17. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  18. Efficacy of carnitine in treatment of tinnitus: evidence from audiological and MRI measures-a case study.

    PubMed

    Gopal, Kamakshi V; Thomas, Binu P; Mao, Deng; Lu, Hanzhang

    2015-03-01

    Tinnitus, or ringing in the ears, is an extremely common ear disorder. However, it is a phenomenon that is very poorly understood and has limited treatment options. The goals of this case study were to identify if the antioxidant acetyl-L-carnitine (ALCAR) provides relief from tinnitus, and to identify if subjective satisfaction after carnitine treatment is accompanied by changes in audiological and imaging measures. Case Study. A 41-yr-old female with a history of hearing loss and tinnitus was interested in exploring the benefits of antioxidant therapy in reducing her tinnitus. The patient was evaluated using a standard audiological/tinnitus test battery and magnetic resonance imaging (MRI) recordings before carnitine treatment. After her physician's approval, the patient took 500 mg of ALCAR twice a day for 30 consecutive days. The audiological and MRI measures were repeated after ALCAR treatment. Pure-tone audiometry, tympanometry, distortion-product otoacoustic emissions, tinnitus questionnaires (Tinnitus Handicap Inventory and Tinnitus Reaction Questionnaire), auditory brainstem response, functional MRI (fMRI), functional connectivity MRI, and cerebral blood flow evaluations were conducted before intake of ALCAR and were repeated 30 days after ALCAR treatment. The patient's pretreatment pure-tone audiogram indicated a mild sensorineural hearing loss at 6 kHz in the right ear and 4 kHz in the left ear. Posttreatment evaluation indicated marginal improvement in the patient's pure-tone thresholds, but was sufficient to be classified as being clinically normal in both ears. Distortion-product otoacoustic emissions results showed increased overall emissions after ALCAR treatment. Subjective report from the patient indicated that her tinnitus was less annoying and barely noticeable during the day after treatment, and the posttreatment tinnitus questionnaire scores supported her statement. Auditory brainstem response peak V amplitude growth between stimulus intensity levels of 40-80 dB nHL indicated a reduction in growth for the posttreatment condition compared with the pretreatment condition. This was attributed to a possible active gating mechanism involving the auditory brainstem after ALCAR treatment. Posttreatment fMRI recordings in response to acoustic stimuli indicated a statistically significant reduction in brain activity in several regions of the brain, including the auditory cortex. Cerebral blood flow showed increased flow in the auditory cortex after treatment. The functional connectivity MRI indicated increased connectivity between the right and left auditory cortex, but a decrease in connectivity between the auditory cortex and some regions of the "default mode network," namely the medial prefrontal cortex and posterior cingulate cortex. The changes observed in the objective and subjective test measures after ALCAR treatment, along with the patient's personal observations, indicate that carnitine intake may be a valuable pharmacological option in the treatment of tinnitus. American Academy of Audiology.

  19. Frequency preference and attention effects across cortical depths in the human primary auditory cortex.

    PubMed

    De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia

    2015-12-29

    Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that-in this highly columnar cortex-task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.

  20. Bilateral Changes of Spontaneous Activity Within the Central Auditory Pathway Upon Chronic Unilateral Intracochlear Electrical Stimulation.

    PubMed

    Basta, Dietmar; Götze, Romy; Gröschel, Moritz; Jansen, Sebastian; Janke, Oliver; Tzschentke, Barbara; Boyle, Patrick; Ernst, Arne

    2015-12-01

    In recent years, cochlear implants have been applied successfully for the treatment of unilateral hearing loss with quite surprising benefit. One reason for this successful treatment, including the relief from tinnitus, could be the normalization of spontaneous activity in the central auditory pathway because of the electrical stimulation. The present study, therefore, investigated at a cellular level the effect of unilateral chronic intracochlear stimulation on key structures of the central auditory pathway. Normal-hearing guinea pigs were deafened mechanically on one side by inserting a standard HiFocus1j electrode array (on a HiRes 90k cochlear implant) into the first turn of the cochlea. Four to five electrode contacts could be used for the stimulation. Six weeks after surgery, the speech processor (Auria) was fitted, based on tNRI values, and mounted on the animal's back. The two experimental groups were stimulated 16 hours per day for 90 days, using a HiRes strategy with different stimulation rates (low rate, 275 pps/ch; high rate, 5000 pps/ch). The results were compared with those of unilaterally deafened controls (implanted but not stimulated), as well as between the treatment groups. All animals experienced a standardized free-field auditory environment. The low-rate group showed significantly lower average spontaneous activity bilaterally in the dorsal cochlear nucleus and the medial geniculate body than the controls. However, there was no difference in the inferior colliculus and the primary auditory cortex. Spontaneous activity of the high-rate group was also reduced bilaterally in the dorsal cochlear nucleus and in the primary auditory cortex. No differences could be observed between the high-rate group and the controls in the contralateral inferior colliculus and medial geniculate body. The high-rate group showed bilaterally higher activity in the cochlear nucleus (CN) and the medial geniculate body (MGB) than the low-rate group, whereas a trend toward the opposite effect was observed in the inferior colliculus (IC) and the auditory cortex (AC). Unilateral intracochlear electrical stimulation seems to facilitate the homeostasis of network activity, since it decreases the spontaneous activity that is usually elevated upon deafferentation. The electrical stimulation per se seems to be responsible for the bilateral changes described above, rather than the particular nature of the electrical stimulation (e.g., rate). The normalization effects of electrical stimulation found in the present study are of particular importance in cochlear implant recipients with single-sided deafness.

  1. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across a set of 20 diverse stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 indicate that consonance is an important dimension of sound, one that is processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and that it is a principal feature of pleasing auditory stimuli.
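
    As a small illustration of the Experiment 1 analysis style, the sketch below computes a Pearson correlation between self-rated pleasantness and a consonance score. All numbers are invented; only the form of the analysis follows the record above.

    ```python
    import numpy as np
    from scipy import stats

    # Invented ratings and consonance scores for 10 sounds (illustration only).
    pleasantness = np.array([6.1, 5.4, 4.9, 4.2, 3.8, 3.1, 2.7, 2.2, 1.9, 1.4])
    consonance = np.array([0.92, 0.81, 0.77, 0.70, 0.58, 0.49, 0.44, 0.35, 0.28, 0.20])

    r, p = stats.pearsonr(pleasantness, consonance)
    print(f"Pearson r = {r:.2f}, p = {p:.3g}")
    ```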

  2. An fMRI study of multimodal selective attention in schizophrenia

    PubMed Central

    Mayer, Andrew R.; Hanlon, Faith M.; Teshiba, Terri M.; Klimaj, Stefan D.; Ling, Josef M.; Dodd, Andrew B.; Calhoun, Vince D.; Bustillo, Juan R.; Toulouse, Trent

    2015-01-01

    Background: Studies have produced conflicting evidence regarding whether cognitive control deficits in patients with schizophrenia result from dysfunction within the cognitive control network (CCN; top-down) and/or unisensory cortex (bottom-up). Aims: To investigate CCN and sensory cortex involvement during multisensory cognitive control in patients with schizophrenia. Method: Patients with schizophrenia and healthy controls underwent functional magnetic resonance imaging while performing a multisensory Stroop task involving auditory and visual distracters. Results: Patients with schizophrenia exhibited an overall pattern of response slowing, and these behavioural deficits were associated with a pattern of patient hyperactivation within auditory, sensorimotor and posterior parietal cortex. In contrast, there were no group differences in functional activation within prefrontal nodes of the CCN, with small effect sizes observed (incongruent–congruent trials). Patients with schizophrenia also failed to upregulate auditory cortex with concomitant increased attentional demands. Conclusions: Results suggest a prominent role for dysfunction within auditory, sensorimotor and parietal areas relative to prefrontal CCN nodes during multisensory cognitive control. PMID:26382953

  3. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    PubMed

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

    This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the associations between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
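
    The sketch below illustrates, on simulated data, the central comparison discussed above: estimating a smooth feature map from sparse, noisy samples using nearest-neighbour assignment (a stand-in for a tessellation plot) versus linear interpolation, with and without averaging repeated measurements. The 1-D map, noise level, and sample counts are assumptions; this is not the paper's code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x_grid = np.linspace(0, 1, 500)

    def true_map(x):
        """Smooth synthetic feature gradient (e.g., log preferred frequency)."""
        return np.log2(1 + 15 * x)

    def rms(err):
        return np.sqrt(np.mean(err ** 2))

    def estimate_errors(n_sites, n_repeats, noise_sd=0.5):
        """RMS error of nearest-neighbour vs. linearly interpolated map estimates."""
        sites = np.sort(rng.uniform(0, 1, n_sites))
        samples = true_map(sites) + rng.normal(0, noise_sd, (n_repeats, n_sites))
        values = samples.mean(axis=0)                       # averaging across repeats
        nearest_idx = np.abs(x_grid[:, None] - sites[None, :]).argmin(axis=1)
        nn_estimate = values[nearest_idx]                   # tessellation-like estimate
        lin_estimate = np.interp(x_grid, sites, values)     # linear interpolation
        target = true_map(x_grid)
        return rms(nn_estimate - target), rms(lin_estimate - target)

    for n_sites, n_rep in [(25, 1), (25, 5), (100, 1), (100, 5)]:
        err_nn, err_lin = estimate_errors(n_sites, n_rep)
        print(f"{n_sites:3d} sites, {n_rep} repeats: nearest RMSE={err_nn:.2f}, linear RMSE={err_lin:.2f}")
    ```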

  5. Strain differences of the effect of enucleation and anophthalmia on the size and growth of sensory cortices in mice.

    PubMed

    Massé, Ian O; Guillemette, Sonia; Laramée, Marie-Eve; Bronchti, Gilles; Boire, Denis

    2014-11-07

    Anophthalmia is a condition in which the eye does not develop from the early embryonic period. Early blindness induces cross-modal plastic modifications in the brain such as auditory and haptic activations of the visual cortex and also leads to a greater solicitation of the somatosensory and auditory cortices. The visual cortex is activated by auditory stimuli in anophthalmic mice and activity is known to alter the growth pattern of the cerebral cortex. The sizes of the primary visual, auditory and somatosensory cortices and of the corresponding specific sensory thalamic nuclei were measured in intact and enucleated C57Bl/6J mice and in ZRDCT anophthalmic mice (ZRDCT/An) to evaluate the contribution of cross-modal activity to the growth of the cerebral cortex. In addition, the sizes of these structures were compared in intact, enucleated and anophthalmic fourth-generation backcrossed hybrid C57Bl/6J×ZRDCT/An mice to parse out the effects of mouse strain and of the different visual deprivations. The visual cortex was smaller in the anophthalmic ZRDCT/An than in the intact and enucleated C57Bl/6J mice. Also, the auditory cortex was larger and the somatosensory cortex smaller in the ZRDCT/An than in the intact and enucleated C57Bl/6J mice. The size differences of sensory cortices between the enucleated and anophthalmic mice were no longer present in the hybrid mice, showing specific genetic differences between C57Bl/6J and ZRDCT mice. The postnatal size increase of the visual cortex was smaller in the enucleated than in the anophthalmic and intact hybrid mice. This suggests differences in the activity of the visual cortex between enucleated and anophthalmic mice and that early in-utero spontaneous neural activity in the visual system contributes to the shaping of functional properties of cortical networks. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    PubMed

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Modulation of Auditory Cortex Response to Pitch Variation Following Training with Microtonal Melodies

    PubMed Central

    Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary

    2012-01-01

    We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation of the blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities. PMID:23227019
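
    The training used a psychophysical staircase procedure with feedback. As an illustration of how such a procedure works, the sketch below runs a 2-down/1-up staircase on a simulated listener; the starting interval, step size, and simulated psychometric function are assumptions, not parameters from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulated_listener(interval_cents, threshold=30.0, slope=0.15):
        """Return True for a correct response, given a logistic psychometric function."""
        p_correct = 0.5 + 0.5 / (1.0 + np.exp(-slope * (interval_cents - threshold)))
        return rng.random() < p_correct

    interval = 100.0            # starting pitch-interval size (cents)
    step = 10.0                 # step size (cents)
    n_correct = 0
    reversals, last_direction = [], None

    for _trial in range(80):
        if simulated_listener(interval):
            n_correct += 1
            if n_correct < 2:
                continue                      # need 2 correct in a row to make it harder
            n_correct, direction = 0, -1
            interval = max(1.0, interval - step)
        else:
            n_correct, direction = 0, +1      # one error makes it easier again
            interval += step
        if last_direction is not None and direction != last_direction:
            reversals.append(interval)
        last_direction = direction

    # Threshold estimate (~70.7% correct point): mean of the last reversal points.
    print(f"estimated threshold ~ {np.mean(reversals[-6:]):.1f} cents")
    ```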

  9. A bilateral cortical network responds to pitch perturbations in speech feedback

    PubMed Central

    Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.

    2014-01-01

    Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223

  10. Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex.

    PubMed

    Phillips, D P; Farmer, M E

    1990-11-15

    This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness following from pathology to the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.

  11. Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia

    PubMed Central

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia that previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328

  12. Steady-state MEG responses elicited by a sequence of amplitude-modulated short tones of different carrier frequencies.

    PubMed

    Kuriki, Shinya; Kobayashi, Yusuke; Kobayashi, Takanari; Tanaka, Keita; Uchikawa, Yoshinori

    2013-02-01

    The auditory steady-state response (ASSR) is a weak potential or magnetic response elicited by periodic acoustic stimuli, with a maximum response at about a 40-Hz periodicity. Most previous studies using amplitude-modulated (AM) tones as stimuli employed long-lasting tones of more than 10 s. However, the characteristics of the ASSR elicited by short AM tones have remained unclear. In this study, we examined the magnetoencephalographic (MEG) ASSR using a sequence of sinusoidal AM tones of 0.78 s in length with tone frequencies of 440-990 Hz, spanning about one octave. It was found that the amplitude of the ASSR was invariant with tone frequency when the sound pressure level was adjusted along an equal-loudness curve. The amplitude also did not depend on the presence of a preceding tone or on the frequency difference from the preceding tone. When the sound level of the AM tones was changed with tone frequency in the same 440-990 Hz range, the amplitude of the ASSR varied in proportion to the sound level. These characteristics are favorable for the use of the ASSR in studying temporal processing of auditory information in the auditory cortex. The lack of adaptation in the ASSR elicited by a sequence of short tones may be ascribed to the neural activity of the widely accepted generator of the magnetic ASSR in the primary auditory cortex. Copyright © 2012 Elsevier B.V. All rights reserved.
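
    A sinusoidally amplitude-modulated tone of the kind described above is straightforward to construct. The sketch below generates one 0.78-s AM tone; the 40-Hz modulation rate follows the typical ASSR paradigm, while the carrier frequency, sampling rate, and level (a placeholder for an equal-loudness adjustment) are illustrative assumptions.

    ```python
    import numpy as np

    fs = 44100          # sampling rate (Hz)
    dur = 0.78          # tone duration (s), as in the record above
    fc = 440.0          # carrier frequency (Hz), illustrative
    fm = 40.0           # modulation frequency (Hz), typical for the ASSR
    level_db = -20.0    # output level re: full scale; stands in for an
                        # equal-loudness adjustment per carrier frequency

    t = np.arange(int(fs * dur)) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t))   # 100% sinusoidal AM
    am_tone = 10 ** (level_db / 20) * envelope * carrier

    print(am_tone.shape, float(np.abs(am_tone).max()))
    ```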

  13. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates

    PubMed Central

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. DOI: http://dx.doi.org/10.7554/eLife.15441.001 PMID:27438411

  14. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  15. Effects of noise-induced hearing loss on parvalbumin and perineuronal net expression in the mouse primary auditory cortex.

    PubMed

    Nguyen, Anna; Khaleel, Haroun M; Razak, Khaleel A

    2017-07-01

    Noise induced hearing loss is associated with increased excitability in the central auditory system but the cellular correlates of such changes remain to be characterized. Here we tested the hypothesis that noise-induced hearing loss causes deterioration of perineuronal nets (PNNs) in the auditory cortex of mice. PNNs are specialized extracellular matrix components that commonly enwrap cortical parvalbumin (PV) containing GABAergic interneurons. Compared to somatosensory and visual cortex, relatively less is known about PV/PNN expression patterns in the primary auditory cortex (A1). Whether changes to cortical PNNs follow acoustic trauma remains unclear. The first aim of this study was to characterize PV/PNN expression in A1 of adult mice. PNNs increase excitability of PV+ inhibitory neurons and confer protection to these neurons against oxidative stress. Decreased PV/PNN expression may therefore lead to a reduction in cortical inhibition. The second aim of this study was to examine PV/PNN expression in superficial (I-IV) and deep cortical layers (V-VI) following noise trauma. Exposing mice to loud noise caused an increase in hearing threshold that lasted at least 30 days. PV and PNN expression in A1 was analyzed at 1, 10 and 30 days following the exposure. No significant changes were observed in the density of PV+, PNN+, or PV/PNN co-localized cells following hearing loss. However, a significant layer- and cell type-specific decrease in PNN intensity was seen following hearing loss. Some changes were present even at 1 day following noise exposure. Attenuation of PNN may contribute to changes in excitability in cortex following noise trauma. The regulation of PNN may open up a temporal window for altered excitability in the adult brain that is then stabilized at a new and potentially pathological level such as in tinnitus. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Domain-specific impairment of source memory following a right posterior medial temporal lobe lesion.

    PubMed

    Peters, Jan; Koch, Benno; Schwarz, Michael; Daum, Irene

    2007-01-01

    This single case analysis of memory performance in a patient with an ischemic lesion affecting posterior but not anterior right medial temporal lobe (MTL) indicates that source memory can be disrupted in a domain-specific manner. The patient showed normal recognition memory for gray-scale photos of objects (visual condition) and spoken words (auditory condition). While memory for visual source (texture/color of the background against which pictures appeared) was within the normal range, auditory source memory (male/female speaker voice) was at chance level, a performance pattern significantly different from the control group. This dissociation is consistent with recent fMRI evidence of anterior/posterior MTL dissociations depending upon the nature of source information (visual texture/color vs. auditory speaker voice). The findings are in good agreement with the view of dissociable memory processing by the perirhinal cortex (anterior MTL) and parahippocampal cortex (posterior MTL), depending upon the neocortical input that these regions receive. (c) 2007 Wiley-Liss, Inc.

  17. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.

    PubMed

    Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R

    2018-02-14

    Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.

  19. Neural evidence for predictive coding in auditory cortex during speech production.

    PubMed

    Okada, Kayoko; Matchin, William; Hickok, Gregory

    2018-02-01

    Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavior and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued either to silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.

  20. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  1. The binaural masking level difference: cortical correlates persist despite severe brain stem atrophy in progressive supranuclear palsy

    PubMed Central

    Rowe, James B.; Ghosh, Boyd C. P.; Carlyon, Robert P.; Plack, Christopher J.; Gockel, Hedwig E.

    2014-01-01

    Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem. PMID:25231610

  2. The binaural masking level difference: cortical correlates persist despite severe brain stem atrophy in progressive supranuclear palsy.

    PubMed

    Hughes, Laura E; Rowe, James B; Ghosh, Boyd C P; Carlyon, Robert P; Plack, Christopher J; Gockel, Hedwig E

    2014-12-15

    Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem. Copyright © 2014 the American Physiological Society.
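
    The two records above rest on the classic BMLD stimulus contrast: a target tone presented in the same interaural phase as the masking noise (NoSo) versus in opposite phase at the two ears (NoSpi). The sketch below constructs both stereo stimuli; the tone frequency, level, and duration are illustrative, not the study's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs, dur, f_tone = 44100, 1.0, 500.0
    t = np.arange(int(fs * dur)) / fs

    noise = rng.standard_normal(t.size)            # same noise token to both ears (No)
    noise /= np.abs(noise).max()
    tone = 0.05 * np.sin(2 * np.pi * f_tone * t)   # low-level target tone

    # NoSo: tone in the same phase at both ears -> strongly masked
    left_so, right_so = noise + tone, noise + tone
    # NoSpi: tone inverted in one ear -> interaural phase difference of pi,
    # which makes the tone detectable at a lower level (the BMLD)
    left_spi, right_spi = noise + tone, noise - tone

    stereo_noso = np.stack([left_so, right_so], axis=1)
    stereo_nospi = np.stack([left_spi, right_spi], axis=1)
    print(stereo_noso.shape, stereo_nospi.shape)
    ```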

  3. CNS BOLD fMRI effects of sham-controlled transcutaneous electrical nerve stimulation in the left outer auditory canal - a pilot study.

    PubMed

    Kraus, Thomas; Kiess, Olga; Hösl, Katharina; Terekhin, Pavel; Kornhuber, Johannes; Forster, Clemens

    2013-09-01

    It has recently been shown that electrical stimulation of sensory afferents within the outer auditory canal may facilitate a transcutaneous form of central nervous system stimulation. Functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) effects in limbic and temporal structures have been detected in two independent studies. In the present study, we investigated BOLD fMRI effects in response to transcutaneous electrical stimulation of two different zones in the left outer auditory canal. It is hypothesized that different central nervous system (CNS) activation patterns might help to localize and specifically stimulate auricular cutaneous vagal afferents. Sixteen healthy subjects aged between 20 and 37 years were divided into two groups. Eight subjects were stimulated in the anterior wall, and the other eight received transcutaneous vagus nerve stimulation (tVNS) at the posterior side of their left outer auditory canal. For sham control, both groups were also stimulated in an alternating manner on their corresponding ear lobe, which is generally known to be free of cutaneous vagal innervation. Functional MR data from the cortex and brain stem level were collected and a group analysis was performed. In most cortical areas, BOLD changes were in the opposite direction when comparing anterior vs. posterior stimulation of the left auditory canal. The only exception was in the insular cortex, where both stimulation types evoked positive BOLD changes. Prominent decreases of the BOLD signals were detected in the parahippocampal gyrus, posterior cingulate cortex and right thalamus (pulvinar) following anterior stimulation. In subcortical areas at brain stem level, a stronger BOLD decrease as compared with sham stimulation was found in the locus coeruleus and the solitary tract only during stimulation of the anterior part of the auditory canal. The results of the study are in line with previous fMRI studies showing robust BOLD signal decreases in limbic structures and the brain stem during electrical stimulation of the left anterior auditory canal. BOLD signal decreases in the area of the nuclei of the vagus nerve may indicate an effective stimulation of vagal afferents. In contrast, stimulation at the posterior wall seems to lead to unspecific changes of the BOLD signal within the solitary tract, which is a key relay station of vagal neurotransmission. The results of the study show promise for a specific novel method of cranial nerve stimulation and provide a basis for further developments and applications of non-invasive transcutaneous vagus stimulation in psychiatric patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing

    PubMed Central

    Beer, Anton L.; Plank, Tina; Meyer, Georg; Greenlee, Mark W.

    2013-01-01

    Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex. PMID:23407860

  5. The neural consequences of age-related hearing loss

    PubMed Central

    Peelle, Jonathan E.; Wingfield, Arthur

    2016-01-01

    During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension. PMID:27262177

  6. Dual-Pitch Processing Mechanisms in Primate Auditory Cortex

    PubMed Central

    Bendor, Daniel; Osmanski, Michael S.

    2012-01-01

    Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues. PMID:23152599

  7. Voxel-based morphometry of auditory and speech-related cortex in stutterers.

    PubMed

    Beal, Deryk S; Gracco, Vincent L; Lafaille, Sophie J; De Nil, Luc F

    2007-08-06

    Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.

  8. Development of visual category selectivity in ventral visual cortex does not require visual experience

    PubMed Central

    van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.

    2017-01-01

    To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
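
    The cross-decoding logic described above, training a classifier on response patterns from one condition or group and testing it on another, can be sketched as follows on simulated data. The pattern model, classifier choice (a linear SVM), and noise level are assumptions; the study's surface-based pipeline is not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(4)
    n_trials, n_voxels, n_categories = 80, 200, 4
    templates = rng.standard_normal((n_categories, n_voxels))   # shared category patterns

    def simulate_patterns(noise_sd):
        """Trial-wise response patterns built from the shared templates plus noise."""
        labels = np.repeat(np.arange(n_categories), n_trials // n_categories)
        patterns = templates[labels] + noise_sd * rng.standard_normal((n_trials, n_voxels))
        return patterns, labels

    X_train, y_train = simulate_patterns(noise_sd=2.0)   # e.g., sound-evoked patterns
    X_test, y_test = simulate_patterns(noise_sd=2.0)     # e.g., visually evoked patterns

    clf = LinearSVC(dual=False).fit(X_train, y_train)
    print("cross-decoding accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```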

  9. Linear Stimulus-Invariant Processing and Spectrotemporal Reverse Correlation in Primary Auditory Cortex

    DTIC Science & Technology

    2003-01-01

    The ectosylvian gyrus, which includes the primary auditory cortex, was exposed by craniotomy and the dura was reflected.

  10. Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI.

    PubMed

    Sengupta, Ayan; Pollmann, Stefan; Hanke, Michael

    2018-01-01

    Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation, primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature, and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal, or whether it is specific to the details of the employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex, matching a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
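
    The analysis idea, spatially filtering voxel patterns and tracking how decoding accuracy changes with filter size, can be illustrated on simulated data as below. For simplicity the sketch uses Gaussian low-pass smoothing rather than the band-pass filters of the study, and a cross-validated linear SVM; all data and filter widths are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_trials, shape = 60, (12, 12, 6)                 # toy "ROI" volume
    templates = rng.standard_normal((2,) + shape)     # two genre-specific patterns
    labels = np.repeat([0, 1], n_trials // 2)
    volumes = templates[labels] + 1.5 * rng.standard_normal((n_trials,) + shape)

    def decoding_accuracy(vols, sigma_vox):
        """Cross-validated accuracy after Gaussian spatial smoothing (sigma in voxels)."""
        if sigma_vox > 0:
            vols = np.stack([gaussian_filter(v, sigma_vox) for v in vols])
        X = vols.reshape(len(vols), -1)
        return cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean()

    for sigma in [0, 1, 2, 4]:
        print(f"smoothing sigma = {sigma} voxels: accuracy = {decoding_accuracy(volumes, sigma):.2f}")
    ```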

  11. Cortical contributions to the auditory frequency-following response revealed by MEG

    PubMed Central

    Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.

    2016-01-01

    The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
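
    A common way to quantify the FFR at the stimulus fundamental is to take the spectrum of the averaged response and read off the amplitude at F0. The sketch below does this for a simulated response; the sampling rate, duration, F0, and signal-to-noise level are illustrative assumptions, and the study's MEG source analysis is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    fs, dur, f0 = 1000.0, 0.3, 98.0     # sampling rate (Hz), duration (s), stimulus F0 (Hz)
    t = np.arange(int(fs * dur)) / fs

    # Simulated averaged response: a small F0-locked component buried in noise.
    response = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)

    spectrum = np.abs(np.fft.rfft(response)) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    f0_bin = int(np.argmin(np.abs(freqs - f0)))
    print(f"amplitude at ~{freqs[f0_bin]:.1f} Hz: {spectrum[f0_bin]:.3f} (arbitrary units)")
    ```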

  12. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey

    PubMed Central

    Scott, Brian H.; Leccese, Paul A.; Saleem, Kadharbatcha S.; Kikuchi, Yukiko; Mullarkey, Matthew P.; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C.

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. PMID:26620266

  13. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  14. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    PubMed

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  15. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions

    PubMed Central

    Intskirveli, Irakli

    2017-01-01

    Abstract Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral “notch” of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing. PMID:28660244

  16. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions.

    PubMed

    Askew, Caitlin; Intskirveli, Irakli; Metherate, Raju

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral "notch" of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing.
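
    To make the stimulus construction concrete, here is a minimal NumPy/SciPy sketch of a tone-in-notched-noise stimulus of the kind described above: a tone at the characteristic frequency summed with white noise that has been band-stop filtered around CF. All parameter values are illustrative, not those used in the study.

        # Sketch: synthesize a tone-in-notched-noise (TINN) stimulus -- a tone at
        # the characteristic frequency (CF) embedded in white noise carrying a
        # spectral notch of a given width centred on CF. Parameter values are
        # illustrative only.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def tinn_stimulus(cf_hz=8000.0, notch_octaves=0.5, dur_s=0.2, fs=44100):
            t = np.arange(int(dur_s * fs)) / fs
            tone = np.sin(2 * np.pi * cf_hz * t)
            noise = np.random.randn(t.size)
            if notch_octaves > 0:
                lo = cf_hz * 2 ** (-notch_octaves / 2)   # lower notch edge (Hz)
                hi = cf_hz * 2 ** (notch_octaves / 2)    # upper notch edge (Hz)
                sos = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")
                noise = sosfiltfilt(sos, noise)          # carve the notch
            stim = tone + noise / np.max(np.abs(noise))  # crude level matching
            return stim / np.max(np.abs(stim))           # normalize to +/-1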

  17. Cortical Inhibition Reduces Information Redundancy at Presentation of Communication Sounds in the Primary Auditory Cortex

    PubMed Central

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris

    2013-01-01

    In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons but also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations by temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockage of GABAA receptors by gabazine (GBZ) application (10 μm, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μm saclofen; 10–50 μm CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of a single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli. PMID:23804094
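
    The following sketch shows one simple (plug-in) way to estimate mutual information between stimulus identity and a discretized single-site firing rate; it is only meant to illustrate the quantity being computed, and the variable names and binning are hypothetical rather than the authors' exact procedure.

        # Sketch: plug-in estimate of the mutual information (in bits) between
        # stimulus identity and a discretized single-site firing rate. Binning
        # and variable names are illustrative.
        import numpy as np

        def mutual_information(stim_ids, rates, n_bins=8):
            """stim_ids: stimulus label per trial; rates: evoked rate per trial (NumPy arrays)."""
            edges = np.histogram_bin_edges(rates, bins=n_bins)
            r_bins = np.digitize(rates, edges)
            mi = 0.0
            for s in np.unique(stim_ids):
                for r in np.unique(r_bins):
                    p_sr = np.mean((stim_ids == s) & (r_bins == r))
                    if p_sr > 0:
                        p_s = np.mean(stim_ids == s)
                        p_r = np.mean(r_bins == r)
                        mi += p_sr * np.log2(p_sr / (p_s * p_r))
            return mi  # note: plug-in estimates are biased upward for few trials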

  18. The steady-state response of the cerebral cortex to the beat of music reflects both the comprehension of music and attention

    PubMed Central

    Meltzer, Benjamin; Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias

    2015-01-01

    The brain’s analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, non-sensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces. PMID:26300760
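
    A steady-state response of this kind is typically quantified as the spectral amplitude at the beat frequency. The sketch below (with a hypothetical `eeg` array of single-channel trials) estimates that amplitude and a crude signal-to-noise ratio against neighbouring frequency bins:

        # Sketch: quantify the steady-state response at the beat frequency as the
        # Fourier amplitude in the nearest frequency bin, averaged over trials,
        # with a crude SNR against neighbouring bins. `eeg` is a hypothetical
        # (n_trials, n_samples) array from a single channel.
        import numpy as np

        def beat_response(eeg, fs, beat_hz):
            n = eeg.shape[1]
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            spectrum = np.abs(np.fft.rfft(eeg, axis=1)).mean(axis=0)  # trial mean
            k = int(np.argmin(np.abs(freqs - beat_hz)))   # bin nearest the beat
            # noise floor from flanking bins (assumes the beat bin is not at an edge)
            noise = np.r_[spectrum[k - 3:k - 1], spectrum[k + 2:k + 4]].mean()
            return spectrum[k], spectrum[k] / noise       # amplitude and SNR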

  19. Neuroimaging and Neuromodulation: Complementary Approaches for Identifying the Neuronal Correlates of Tinnitus

    PubMed Central

    Langguth, Berthold; Schecklmann, Martin; Lehner, Astrid; Landgrebe, Michael; Poeppl, Timm Benjamin; Kreuzer, Peter Michal; Schlee, Winfried; Weisz, Nathan; Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    An inherent limitation of functional imaging studies is their correlational approach. More information about critical contributions of specific brain regions can be gained by focal transient perturbation of neural activity in specific regions with non-invasive focal brain stimulation methods. Functional imaging studies have revealed that tinnitus is related to alterations in neuronal activity of central auditory pathways. Modulation of neuronal activity in auditory cortical areas by repetitive transcranial magnetic stimulation (rTMS) can reduce tinnitus loudness and, if applied repeatedly, exerts therapeutic effects, confirming the relevance of auditory cortex activation for tinnitus generation and persistence. Measurements of oscillatory brain activity before and after rTMS demonstrate that the same stimulation protocol has different effects on brain activity in different patients, presumably related to interindividual differences in baseline activity in the clinically heterogeneous study cohort. In addition to alterations in auditory pathways, imaging techniques also indicate the involvement of non-auditory brain areas, such as the fronto-parietal “awareness” network and the non-tinnitus-specific distress network consisting of the anterior cingulate cortex, anterior insula, and amygdala. Involvement of the hippocampus and the parahippocampal region putatively reflects the relevance of memory mechanisms in the persistence of the phantom percept and the associated distress. Preliminary studies targeting the dorsolateral prefrontal cortex, the dorsal anterior cingulate cortex, and the parietal cortex with rTMS and with transcranial direct current stimulation confirm the relevance of the mentioned non-auditory networks. Available data indicate the important value added by brain stimulation as a complementary approach to neuroimaging for identifying the neuronal correlates of the various clinical aspects of tinnitus. PMID:22509155

  20. Neural effects of cognitive control load on auditory selective attention.

    PubMed

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali

    2014-08-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Human auditory evoked potentials. I - Evaluation of components

    NASA Technical Reports Server (NTRS)

    Picton, T. W.; Hillyard, S. A.; Krausz, H. I.; Galambos, R.

    1974-01-01

    Fifteen distinct components can be identified in the scalp recorded average evoked potential to an abrupt auditory stimulus. The early components occurring in the first 8 msec after a stimulus represent the activation of the cochlea and the auditory nuclei of the brainstem. The middle latency components occurring between 8 and 50 msec after the stimulus probably represent activation of both auditory thalamus and cortex but can be seriously contaminated by concurrent scalp muscle reflex potentials. The longer latency components occurring between 50 and 300 msec after the stimulus are maximally recorded over fronto-central scalp regions and seem to represent widespread activation of frontal cortex.

  2. Scanning silence: mental imagery of complex sounds.

    PubMed

    Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz

    2005-07-15

    In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.

  3. Dynamics of Electrocorticographic (ECoG) Activity in Human Temporal and Frontal Cortical Areas During Music Listening

    DTIC Science & Technology

    2012-04-14

    flow or electrical activity in the primary auditory cortex and sound intensity level. Other studies (Brechmann et al., 2002; Hart et al., 2003; Tanji et...duration. Decoding of perceived loudness from brain signals may have important applications for the calibration of stimulation levels of cochlear implants

  4. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  5. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357
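
    The cross-cue classification logic can be illustrated with a short scikit-learn sketch: a classifier trained on patterns evoked by one cue (e.g., ITD) is tested on patterns evoked by the other (ILD). The array names are hypothetical and this is not the authors' pipeline.

        # Sketch: cross-cue multivoxel pattern classification -- train on patterns
        # evoked by one cue (e.g., ITD) and test on the other (ILD), the logic of
        # the cross-cue test described above. X_itd/X_ild (n_trials, n_voxels) and
        # the lateralization labels y_itd/y_ild are hypothetical placeholders.
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        def cross_cue_accuracy(X_train, y_train, X_test, y_test):
            clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
            clf.fit(X_train, y_train)
            return clf.score(X_test, y_test)   # accuracy on the untrained cue

        # acc_itd_to_ild = cross_cue_accuracy(X_itd, y_itd, X_ild, y_ild)
        # acc_ild_to_itd = cross_cue_accuracy(X_ild, y_ild, X_itd, y_itd)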

  6. Electrical Stimulation of the Ear, Head, Cranial Nerve, or Cortex for the Treatment of Tinnitus: A Scoping Review

    PubMed Central

    Adjamian, Peyman

    2016-01-01

    Tinnitus is defined as the perception of sound in the absence of an external source. It is often associated with hearing loss and is thought to result from abnormal neural activity at some point or points in the auditory pathway, which is incorrectly interpreted by the brain as an actual sound. Neurostimulation therapies therefore, which interfere on some level with that abnormal activity, are a logical approach to treatment. For tinnitus, where the pathological neuronal activity might be associated with auditory and other areas of the brain, interventions using electromagnetic, electrical, or acoustic stimuli separately, or paired electrical and acoustic stimuli, have been proposed as treatments. Neurostimulation therapies should modulate neural activity to deliver a permanent reduction in tinnitus percept by driving the neuroplastic changes necessary to interrupt abnormal levels of oscillatory cortical activity and restore typical levels of activity. This change in activity should alter or interrupt the tinnitus percept (reduction or extinction) making it less bothersome. Here we review developments in therapies involving electrical stimulation of the ear, head, cranial nerve, or cortex in the treatment of tinnitus which demonstrably, or are hypothesised to, interrupt pathological neuronal activity in the cortex associated with tinnitus. PMID:27403346

  7. A Non-canonical Reticular-Limbic Central Auditory Pathway via Medial Septum Contributes to Fear Conditioning.

    PubMed

    Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I

    2018-01-17

    In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. The Non-Lemniscal Auditory Cortex in Ferrets: Convergence of Corticotectal Inputs in the Superior Colliculus

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; Bizley, Jennifer K.; King, Andrew J.

    2010-01-01

    Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers. PMID:20640247

  9. Human amygdala activation by the sound produced during dental treatment: A fMRI study.

    PubMed

    Yu, Jen-Fang; Lee, Kun-Che; Hong, Hsiang-Hsi; Kuo, Song-Bor; Wu, Chung-De; Wai, Yau-Yau; Chen, Yi-Fen; Peng, Ying-Chin

    2015-01-01

    During dental treatments, patients may experience negative emotions associated with the procedure. This study used functional magnetic resonance imaging (fMRI) to visualize cerebral cortical activation in dental patients in response to auditory stimuli produced by ultrasonic scaling and power suction equipment. Subjects (n = 7) aged 23-35 years were recruited. All were right-handed and underwent clinical pure-tone audiometry, which revealed normal hearing thresholds below 20 dB hearing level (HL). Subjects first underwent a dental calculus removal treatment, during which they were exposed to ultrasonic auditory stimuli originating from the scaling handpiece and salivary suction instruments. After the dental treatment, subjects were imaged with fMRI while listening to recordings of the noise from the same dental instruments so that cerebral cortical activation in response to aversive auditory stimulation could be observed. Independent-sample confirmatory t-tests were used. Subjects showed activation in the auditory cortex as well as in the amygdala and prefrontal cortex, indicating that the ultrasonic auditory stimuli elicited an unpleasant response. Patients experience unpleasant sensations caused by contact stimuli during the treatment procedure; this study demonstrates that aversive auditory stimuli, such as the sound of the ultrasonic scaling handpiece, also elicit aversive emotions, as indicated by the observed activation of the auditory cortex and amygdala. Thus, the sound of the ultrasonic scaling handpiece alone can cause subjects to experience unpleasant sensations.

  10. Human amygdala activation by the sound produced during dental treatment: A fMRI study

    PubMed Central

    Yu, Jen-Fang; Lee, Kun-Che; Hong, Hsiang-Hsi; Kuo, Song-Bor; Wu, Chung-De; Wai, Yau-Yau; Chen, Yi-Fen; Peng, Ying-Chin

    2015-01-01

    During dental treatments, patients may experience negative emotions associated with the procedure. This study used functional magnetic resonance imaging (fMRI) to visualize cerebral cortical activation in dental patients in response to auditory stimuli produced by ultrasonic scaling and power suction equipment. Subjects (n = 7) aged 23-35 years were recruited. All were right-handed and underwent clinical pure-tone audiometry, which revealed normal hearing thresholds below 20 dB hearing level (HL). Subjects first underwent a dental calculus removal treatment, during which they were exposed to ultrasonic auditory stimuli originating from the scaling handpiece and salivary suction instruments. After the dental treatment, subjects were imaged with fMRI while listening to recordings of the noise from the same dental instruments so that cerebral cortical activation in response to aversive auditory stimulation could be observed. Independent-sample confirmatory t-tests were used. Subjects showed activation in the auditory cortex as well as in the amygdala and prefrontal cortex, indicating that the ultrasonic auditory stimuli elicited an unpleasant response. Patients experience unpleasant sensations caused by contact stimuli during the treatment procedure; this study demonstrates that aversive auditory stimuli, such as the sound of the ultrasonic scaling handpiece, also elicit aversive emotions, as indicated by the observed activation of the auditory cortex and amygdala. Thus, the sound of the ultrasonic scaling handpiece alone can cause subjects to experience unpleasant sensations. PMID:26356376

  11. Long-term effects of repetitive transcranial magnetic stimulation (rTMS) in patients with chronic tinnitus.

    PubMed

    Kleinjung, Tobias; Eichhammer, Peter; Langguth, Berthold; Jacob, Peter; Marienhagen, Joerg; Hajak, Goeran; Wolf, Stephan R; Strutz, Juergen

    2005-04-01

    The pathophysiologic mechanisms of idiopathic tinnitus remain unclear. Recent studies demonstrated focal brain activation in the auditory cortex of patients with chronic tinnitus. Low-frequency repetitive transcranial magnetic stimulation (rTMS) is able to reduce cortical hyperexcitability. Fusing of the individual PET-scan with the structural MRI-scan (T1, MPRAGE) allowed us to identify exactly the area of increased metabolic activity in the auditory cortex of patients with chronic tinnitus. With the use of a neuronavigational system, this target area was exactly stimulated by the figure 8-shaped magnetic coil. In a prospective study, rTMS (110% motor threshold; 1 Hz; 2000 stimuli/day over 5 days) was performed using a placebo controlled cross-over design. Patients were blinded regarding the stimulus condition. For the sham stimulation a specific sham-coil system was used. Fourteen patients were followed for 6 months. Treatment outcome was assessed with a specific tinnitus questionnaire (Goebel and Hiller). Tertiary referral medical center. Increased metabolic activation in the auditory cortex was verified in all patients. After 5 days of verum rTMS, a highly significant improvement of the tinnitus score was found whereas the sham treatment did not show any significant changes. The treatment outcome after 6 months still demonstrated significant reduction of tinnitus score. These preliminary results demonstrate that neuronavigated rTMS offers new possibilities in the understanding and treatment of chronic tinnitus.

  12. Predictive cues for auditory stream formation in humans and monkeys.

    PubMed

    Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael

    2017-12-18

    Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences as well as measuring the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796

  14. Fundamental deficits of auditory perception in Wernicke's aphasia.

    PubMed

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Bioacoustic Signal Classification in Cat Auditory Cortex

    DTIC Science & Technology

    1994-01-01

    ... for fast FM sweeps. A second maximum ... In Fig. 8D (87-001) the orientation of the mapped area was tilted ... Brashear, H.R., and Heilman, K.M. Pure word deafness after bilateral primary auditory cortex infarcts. Neurology 34: 347-352, 1984. Cranford, J.L., ...

  16. Cortical Activation during Attention to Sound in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Funabiki, Yasuko; Murai, Toshiya; Toichi, Motomi

    2012-01-01

    Individuals with autism spectrum disorders (ASDs) can demonstrate hypersensitivity to sounds as well as a lack of awareness of them. Several functional imaging studies have suggested an abnormal response in the auditory cortex of such subjects, but it is not known whether these subjects have dysfunction in the auditory cortex or are simply not…

  17. Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues

    ERIC Educational Resources Information Center

    Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.

    2016-01-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…

  18. Neural Representation of Scale Illusion: Magnetoencephalographic Study on the Auditory Illusion Induced by Distinctive Tone Sequences in the Two Ears

    PubMed Central

    Kuriki, Shinya; Yokosawa, Koichi; Takahashi, Makoto

    2013-01-01

    The auditory illusory perception “scale illusion” occurs when a tone of ascending scale is presented in one ear, a tone of descending scale is presented simultaneously in the other ear, and vice versa. Most listeners hear illusory percepts of smooth pitch contours of the higher half of the scale in the right ear and the lower half in the left ear. Little is known about neural processes underlying the scale illusion. In this magnetoencephalographic study, we recorded steady-state responses to amplitude-modulated short tones having illusion-inducing pitch sequences, where the sound level of the modulated tones was manipulated to decrease monotonically with increase in pitch. The steady-state responses were decomposed into right- and left-sound components by means of separate modulation frequencies. It was found that the time course of the magnitude of response components of illusion-perceiving listeners was significantly correlated with smooth pitch contour of illusory percepts and that the time course of response components of stimulus-perceiving listeners was significantly correlated with discontinuous pitch contour of stimulus percepts in addition to the contour of illusory percepts. The results suggest that the percept of illusory pitch sequence was represented in the neural activity in or near the primary auditory cortex, i.e., the site of generation of auditory steady-state response, and that perception of scale illusion is maintained by automatic low-level processing. PMID:24086676

  19. Passive stimulation and behavioral training differentially transform temporal processing in the inferior colliculus and primary auditory cortex

    PubMed Central

    Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.

    2016-01-01

    In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that independent of deafness duration passive stimulation and behavioral training differentially transform temporal processing in auditory midbrain and cortex, and primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs. passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594

  20. Connectional Modularity of Top-Down and Bottom-Up Multimodal Inputs to the Lateral Cortex of the Mouse Inferior Colliculus

    PubMed Central

    Lesicko, Alexandria M.H.; Hristova, Teodora S.; Maigler, Kathleen C.

    2016-01-01

    The lateral cortex of the inferior colliculus receives information from both auditory and somatosensory structures and is thought to play a role in multisensory integration. Previous studies in the rat have shown that this nucleus contains a series of distinct anatomical modules that stain for GAD-67 as well as other neurochemical markers. In the present study, we sought to better characterize these modules in the mouse inferior colliculus and determine whether the connectivity of other neural structures with the lateral cortex is spatially related to the distribution of these neurochemical modules. Staining for GAD-67 and other markers revealed a single modular network throughout the rostrocaudal extent of the mouse lateral cortex. Somatosensory inputs from the somatosensory cortex and dorsal column nuclei were found to terminate almost exclusively within these modular zones. However, projections from the auditory cortex and central nucleus of the inferior colliculus formed patches that interdigitate with the GAD-67-positive modules. These results suggest that the lateral cortex of the mouse inferior colliculus exhibits connectional as well as neurochemical modularity and may contain multiple segregated processing streams. This finding is discussed in the context of other brain structures in which neuroanatomical and connectional modularity have functional consequences. SIGNIFICANCE STATEMENT Many brain regions contain subnuclear microarchitectures, such as the matrix-striosome organization of the basal ganglia or the patch-interpatch organization of the visual cortex, that shed light on circuit complexities. In the present study, we demonstrate the presence of one such micro-organization in the rodent inferior colliculus. While this structure is typically viewed as an auditory integration center, its lateral cortex appears to be involved in multisensory operations and receives input from somatosensory brain regions. We show here that the lateral cortex can be further subdivided into multiple processing streams: modular regions, which are targeted by somatosensory inputs, and extramodular zones that receive auditory information. PMID:27798184

  1. Thresholding of auditory cortical representation by background noise

    PubMed Central

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in a preserved preferred characteristic frequency (CF) and overall TRF shape, but a reduced frequency response range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along the intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
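
    The level-dependent threshold shift described above can be summarized as a simple rule: above a critical noise level, the tone-response threshold rises linearly with noise intensity. A minimal sketch, with illustrative parameter values only:

        # Sketch: the level-dependent threshold shift expressed as a simple rule --
        # above a critical noise level, the tone-response threshold rises linearly
        # with noise intensity. Parameter values are illustrative only.
        def shifted_threshold(quiet_threshold_db, noise_db, critical_db, slope=1.0):
            excess = max(0.0, noise_db - critical_db)   # noise above critical level
            return quiet_threshold_db + slope * excess

        # e.g., shifted_threshold(20, 55, 40) -> 35 (dB) with unity slope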

  2. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis

    PubMed Central

    Evans, Samuel; Davis, Matthew H.

    2015-01-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. PMID:26157026
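
    The core searchlight computation in representational similarity analysis can be sketched in a few lines: build a neural dissimilarity matrix from the condition-wise patterns in one searchlight and rank-correlate it with a model dissimilarity matrix. The inputs below are hypothetical placeholders, not the study's data or pipeline.

        # Sketch: the core searchlight RSA computation -- a neural dissimilarity
        # matrix from condition-wise patterns, rank-correlated with a model RDM
        # (e.g., "same syllable identity" vs. "same acoustic form"). Inputs are
        # hypothetical placeholders.
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        def rsa_score(patterns, model_rdm):
            """patterns: (n_conditions, n_voxels) responses in one searchlight;
            model_rdm: condensed model dissimilarities in pdist order."""
            neural_rdm = pdist(patterns, metric="correlation")  # 1 - Pearson r
            rho, _ = spearmanr(neural_rdm, model_rdm)
            return rho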

  3. Changes in pitch height elicit both language universal and language dependent changes in neural representation of pitch in the brainstem and auditory cortex

    PubMed Central

    Krishnan, Ananthanarayan; Suresh, Chandan H.; Gandour, Jackson T.

    2017-01-01

    Language experience shapes encoding of pitch-relevant information at both brainstem and cortical levels of processing. Pitch height is a salient dimension that orders pitch from low to high. Herein we investigate the effects of language experience (Chinese, English) in the brainstem and cortex on i) neural responses to variations in pitch height, ii) presence of asymmetry in cortical pitch representation, and iii) patterns of relative changes in magnitude of pitch height between these two levels of brain structure. Stimuli were three nonspeech homologs of Mandarin Tone 2 varying in pitch height only. The frequency-following response (FFR) and the cortical pitch-specific response (CPR) were recorded concurrently. At the Fz-linked T7/T8 site, peak latency of Na, Pb, and Nb decreased with increasing pitch height for both groups. Peak-to-peak amplitude of Na–Pb and Pb–Nb increased with increasing pitch height across groups. A language-dependent effect was restricted to Na-Pb; the Chinese had larger amplitude than the English group. At temporal sites (T7/T8), the Chinese group had larger amplitude, as compared to English, across stimuli, but also limited to the Na-Pb component and right temporal site. In the brainstem, F0 magnitude decreased with increasing pitch height; Chinese had larger magnitude across stimuli. A comparison of CPR and FFR responses revealed distinct patterns of relative changes in magnitude common to both groups. CPR amplitude increased and FFR amplitude decreased with increasing pitch height. Experience-dependent effects on CPR components vary as a function of neural sensitivity to pitch height within a particular temporal window (Na–Pb). Differences between the auditory brainstem and cortex imply distinct neural mechanisms for pitch extraction at both levels of brain structure. PMID:28108254

  4. Changes in pitch height elicit both language-universal and language-dependent changes in neural representation of pitch in the brainstem and auditory cortex.

    PubMed

    Krishnan, Ananthanarayan; Suresh, Chandan H; Gandour, Jackson T

    2017-03-27

    Language experience shapes encoding of pitch-relevant information at both brainstem and cortical levels of processing. Pitch height is a salient dimension that orders pitch from low to high. Herein we investigate the effects of language experience (Chinese, English) in the brainstem and cortex on (i) neural responses to variations in pitch height, (ii) presence of asymmetry in cortical pitch representation, and (iii) patterns of relative changes in magnitude of pitch height between these two levels of brain structure. Stimuli were three nonspeech homologs of Mandarin Tone 2 varying in pitch height only. The frequency-following response (FFR) and the cortical pitch-specific response (CPR) were recorded concurrently. At the Fz-linked T7/T8 site, peak latency of Na, Pb, and Nb decreased with increasing pitch height for both groups. Peak-to-peak amplitude of Na-Pb and Pb-Nb increased with increasing pitch height across groups. A language-dependent effect was restricted to Na-Pb; the Chinese had larger amplitude than the English group. At temporal sites (T7/T8), the Chinese group had larger amplitude, as compared to English, across stimuli, but also limited to the Na-Pb component and right temporal site. In the brainstem, F0 magnitude decreased with increasing pitch height; Chinese had larger magnitude across stimuli. A comparison of CPR and FFR responses revealed distinct patterns of relative changes in magnitude common to both groups. CPR amplitude increased and FFR amplitude decreased with increasing pitch height. Experience-dependent effects on CPR components vary as a function of neural sensitivity to pitch height within a particular temporal window (Na-Pb). Differences between the auditory brainstem and cortex imply distinct neural mechanisms for pitch extraction at both levels of brain structure. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Listening to Another Sense: Somatosensory Integration in the Auditory System

    PubMed Central

    Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.

    2014-01-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698

  6. The spectrotemporal filter mechanism of auditory selective attention

    PubMed Central

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    SUMMARY While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  7. Effect of the environment on the dendritic morphology of the rat auditory cortex

    PubMed Central

    Bose, Mitali; Muñoz-Llancao, Pablo; Roychowdhury, Swagata; Nichols, Justin A.; Jakkamsetti, Vikram; Porter, Benjamin; Byrapureddy, Rajasekhar; Salgado, Humberto; Kilgard, Michael P.; Aboitiz, Francisco; Dagnino-Subiabre, Alexies; Atzori, Marco

    2010-01-01

    The present study aimed to identify morphological correlates of environment-induced changes at excitatory synapses of the primary auditory cortex (A1). We used the Golgi-Cox stain technique to compare pyramidal cells dendritic properties of Sprague-Dawley rats exposed to different environmental manipulations. Sholl analysis, dendritic length measures, and spine density counts were used to monitor the effects of sensory deafness and an auditory version of environmental enrichment (EE). We found that deafness decreased apical dendritic length leaving basal dendritic length unchanged, whereas EE selectively increased basal dendritic length without changing apical dendritic length. On the contrary, deafness decreased while EE increased spine density in both basal and apical dendrites of A1 layer 2/3 (LII/III) neurons. To determine whether stress contributed to the observed morphological changes in A1, we studied neural morphology in a restraint-induced model that lacked behaviorally relevant acoustic cues. We found that stress selectively decreased apical dendritic length in the auditory but not in the visual primary cortex. Similar to the acoustic manipulation, stress-induced changes in dendritic length possessed a layer specific pattern displaying LII/III neurons from stressed animals with normal apical dendrites but shorter basal dendrites, while infragranular neurons (layers V and VI) displayed shorter apical dendrites but normal basal dendrites. The same treatment did not induce similar changes in the visual cortex, demonstrating that the auditory cortex is an exquisitely sensitive target of neocortical plasticity, and that prolonged exposure to different acoustic as well as emotional environmental manipulation may produce specific changes in dendritic shape and spine density. PMID:19771593
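
    Sholl analysis, used above to monitor dendritic complexity, counts how often dendrites cross concentric circles centered on the soma. The following is a minimal sketch, assuming reconstructed dendrites are available as 2-D polylines (in µm, soma at the origin); the radii and the crossing test are simplified illustrations rather than the authors' Golgi-Cox pipeline.

        import numpy as np

        def sholl_counts(dendrite_paths, radii):
            """Count dendrite crossings of concentric circles (Sholl analysis).

            dendrite_paths : list of (N, 2) arrays of x, y nodes (µm), soma at origin
            radii          : 1-D array of circle radii (µm)
            """
            counts = np.zeros(len(radii), dtype=int)
            for path in dendrite_paths:
                d = np.linalg.norm(path, axis=1)      # node distances from the soma
                for i, r in enumerate(radii):
                    inside = d < r
                    # a segment crosses the circle when consecutive nodes straddle it
                    counts[i] += np.sum(inside[:-1] != inside[1:])
            return counts

        # Illustrative use with a toy dendrite
        path = np.array([[0, 0], [10, 5], [25, 10], [40, 0], [60, 15]], dtype=float)
        print(sholl_counts([path], radii=np.arange(10, 70, 10)))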

  8. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  9. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey.

    PubMed

    Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  10. Long-term Administration of Salicylate-induced Changes in BDNF Expression and CREB Phosphorylation in the Auditory Cortex of Rats

    PubMed Central

    Yi, Bin; Wu, Cong; Shi, Runjie; Han, Kun; Sheng, Haibin; Li, Bei; Mei, Ling; Wang, Xueling; Huang, Zhiwu; Wu, Hao

    2018-01-01

    Hypothesis: We investigated whether salicylate induces tinnitus through alteration of the expression levels of brain-derived neurotrophic factor (BDNF), proBDNF, tyrosine kinase receptor B (TrkB), cAMP-responsive element-binding protein (CREB), and phosphorylated CREB (p-CREB) in the auditory cortex (AC). Background: Salicylate medication is frequently used for long-term treatment in clinical settings, but it may cause reversible tinnitus. Salicylate-induced tinnitus is associated with changes related to central auditory neuroplasticity. Our previous studies revealed enhanced neural activity and ultrastructural synaptic changes in the central auditory system after long-term salicylate administration. However, the underlying mechanisms remained unclear. Methods: Salicylate-induced tinnitus-like behavior in rats was confirmed using gap prepulse inhibition of acoustic startle and prepulse inhibition testing, followed by comparison of the expression levels of BDNF, proBDNF, TrkB, CREB, and p-CREB. Synaptic ultrastructure was observed under a transmission electron microscope. Results: BDNF and p-CREB were upregulated along with ultrastructural changes at the synapses in the AC of rats treated chronically with salicylate (p < 0.05, compared with control group). These changes returned to normal after 14 days of recovery (p > 0.05). Conclusion: Long-term administration of salicylate increased BDNF expression and CREB activation, upregulated synaptic efficacy, and changed synaptic ultrastructure in the AC. There may be a relationship between these factors and the mechanism of tinnitus. PMID:29342042

  11. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.

    PubMed

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.
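
    For the LLR range described above, deviance-related activity is conventionally quantified as a deviant-minus-standard difference wave, with the MMN taken as the most negative deflection in a post-stimulus window. A minimal sketch, assuming epoched EEG already sorted by condition; the epoch counts, latency window, and sampling rate are illustrative assumptions rather than the study's parameters.

        import numpy as np

        def mmn_amplitude(deviant_epochs, standard_epochs, fs, window=(0.100, 0.250)):
            """Deviant-minus-standard difference wave and MMN peak amplitude.

            deviant_epochs, standard_epochs : (n_trials, n_samples) arrays,
                with epochs assumed to start at stimulus onset
            fs     : sampling rate (Hz)
            window : latency window (s) in which to search for the MMN minimum
            """
            diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
            i0, i1 = int(window[0] * fs), int(window[1] * fs)
            return diff_wave, diff_wave[i0:i1].min()   # the MMN is a negative deflection

        # Illustrative use with random arrays standing in for real epochs
        fs = 500
        dev, std = np.random.randn(80, fs), np.random.randn(400, fs)
        difference_wave, mmn = mmn_amplitude(dev, std, fs)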

  12. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    PubMed Central

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  13. Sound Sequence Discrimination Learning Motivated by Reward Requires Dopaminergic D2 Receptor Activation in the Rat Auditory Cortex

    ERIC Educational Resources Information Center

    Kudoh, Masaharu; Shibuki, Katsuei

    2006-01-01

    We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…

  14. Spectral and Temporal Processing in Rat Posterior Auditory Cortex

    PubMed Central

    Pandya, Pritesh K.; Rathbun, Daniel L.; Moucha, Raluca; Engineer, Navzer D.; Kilgard, Michael P.

    2009-01-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than those of A1 neurons. The broader receptive fields of PAF neurons result in responses to narrowband and broadband inputs that are stronger than those in A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. PAF neurons exhibit latencies that are twice as long as those in A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for the rate or direction of narrowband, one-octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex. PMID:17615251

  15. Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex.

    PubMed

    Razak, Khaleel A

    2013-01-01

    Auditory neurons in bats that use frequency modulated (FM) sweeps for echolocation are selective for the behaviorally-relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and the spectral and temporal properties of sideband inhibition has been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times compared to high-frequency inhibition (HFI). Using the two-tone inhibition over time (TTI) stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing the relative intensities of the excitatory and inhibitory tones on the arrival time of inhibition was studied. Using this stimulation paradigm, single-unit data from the auditory cortex of pentobarbital-anesthetized pallid bats show that the threshold for LFI is on average ~8 dB lower than that for HFI. For equal intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreased. The temporal asymmetry in LFI vs. HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensities or if the excitatory tone is louder. As inhibitory tone intensity is increased, the temporal asymmetry decreased, suggesting that the relative magnitudes of excitatory and inhibitory inputs shape the arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in threshold and strength of LFI vs. HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate selective responses to downward sweeps.
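
    The two-tone paradigm used above pairs an excitatory tone at a unit's characteristic frequency with an inhibitory sideband tone whose onset delay and relative level are varied. Below is a minimal sketch of such a stimulus generator, only to make the parameter space concrete; the tone duration, frequencies, amplitudes, and the sign convention for the delay are hypothetical, not values from the study.

        import numpy as np

        def tti_stimulus(fs, exc_freq, inh_freq, exc_level, inh_level, delay_s, tone_s=0.005):
            """Two-tone inhibition-over-time (TTI) stimulus.

            An excitatory tone and an inhibitory sideband tone are gated on with a
            variable onset delay; their amplitudes set the relative intensity.
            Positive delay_s means the inhibitory tone leads the excitatory tone.
            """
            n_tone = int(tone_s * fs)
            n_delay = int(abs(delay_s) * fs)
            t = np.arange(n_tone) / fs
            exc = exc_level * np.sin(2 * np.pi * exc_freq * t)
            inh = inh_level * np.sin(2 * np.pi * inh_freq * t)
            out = np.zeros(n_tone + n_delay)
            first, second = (inh, exc) if delay_s >= 0 else (exc, inh)
            out[:n_tone] += first                      # leading tone at time zero
            out[n_delay:n_delay + n_tone] += second    # lagging tone after the delay
            return out

        # Illustrative use: inhibitory sideband 10x weaker, leading by 2 ms
        stim = tti_stimulus(fs=250000, exc_freq=30000, inh_freq=20000,
                            exc_level=1.0, inh_level=0.1, delay_s=0.002)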

  16. Right Occipital Cortex Activation Correlates with Superior Odor Processing Performance in the Early Blind

    PubMed Central

    Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.

    2013-01-01

    Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Only little occipital activation was found in sighted subjects, but the same right-olfactory/left-auditory-verbal hemispheric lateralization was found overall in their brain. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263

  17. Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design

    PubMed Central

    Pernet, Cyril; Latinus, Marianne; Crabbe, Frances; Belin, Pascal

    2013-01-01

    Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to the acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, functioning in tandem with the prefrontal cortex in voice gender perception. PMID:22490550

  18. PTEN regulation of local and long-range connections in mouse auditory cortex

    PubMed Central

    Xiong, Qiaojie; Oviedo, Hysell V; Trotman, Lloyd C; Zador, Anthony M

    2012-01-01

    Autism Spectrum Disorders (ASDs) are highly heritable developmental disorders caused by a heterogeneous collection of genetic lesions. Here we use a mouse model to study the effect on cortical connectivity of disrupting the ASD candidate gene PTEN. Through Cre-mediated recombination we conditionally knocked out PTEN expression in a subset of auditory cortical neurons. Analysis of long range connectivity using channelrhodopsin-2 (ChR2) revealed that the strength of synaptic inputs from both the contralateral auditory cortex and from the thalamus onto PTEN-cko neurons was enhanced compared with nearby neurons with normal PTEN expression. Laser scanning photostimulation (LSPS) showed that local inputs onto PTEN-cko neurons in the auditory cortex were similarly enhanced. The hyperconnectivity caused by PTEN-cko could be blocked by rapamycin, a specific inhibitor of the PTEN downstream molecule mTORC1. Together our results suggest that local and long-range hyperconnectivity may constitute a physiological basis for the effects of mutations in PTEN and possibly other ASD candidate genes. PMID:22302806

  19. Early Seizures Prematurely Unsilence Auditory Synapses to Disrupt Thalamocortical Critical Period Plasticity.

    PubMed

    Sun, Hongyu; Takesian, Anne E; Wang, Ting Ting; Lippman-Bell, Jocelyn J; Hensch, Takao K; Jensen, Frances E

    2018-05-29

    Heightened neural excitability in infancy and childhood results in increased susceptibility to seizures. Such early-life seizures are associated with language deficits and autism that can result from aberrant development of the auditory cortex. Here, we show that early-life seizures disrupt a critical period (CP) for tonotopic map plasticity in primary auditory cortex (A1). We show that this CP is characterized by a prevalence of "silent," NMDA-receptor (NMDAR)-only, glutamate receptor synapses in auditory cortex that become "unsilenced" due to activity-dependent AMPA receptor (AMPAR) insertion. Induction of seizures prior to this CP occludes tonotopic map plasticity by prematurely unsilencing NMDAR-only synapses. Further, brief treatment with the AMPAR antagonist NBQX following seizures, prior to the CP, prevents synapse unsilencing and permits subsequent A1 plasticity. These findings reveal that early-life seizures modify CP regulators and suggest that therapeutic targets for early post-seizure treatment can rescue CP plasticity. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Responses in Rat Core Auditory Cortex are Preserved during Sleep Spindle Oscillations

    PubMed Central

    Sela, Yaniv; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Tononi, Giulio; Nir, Yuval

    2016-01-01

    Study Objectives: Sleep is defined as a reversible state of reduction in sensory responsiveness and immobility. A long-standing hypothesis suggests that a high arousal threshold during non-rapid eye movement (NREM) sleep is mediated by sleep spindle oscillations, impairing thalamocortical transmission of incoming sensory stimuli. Here we set out to test this idea directly by examining sensory-evoked neuronal spiking activity during natural sleep. Methods: We compared neuronal (n = 269) and multiunit activity (MUA), as well as local field potentials (LFP) in rat core auditory cortex (A1) during NREM sleep, comparing responses to sounds depending on the presence or absence of sleep spindles. Results: We found that sleep spindles robustly modulated the timing of neuronal discharges in A1. However, responses to sounds were nearly identical for all measured signals including isolated neurons, MUA, and LFPs (all differences < 10%). Furthermore, in 10% of trials, auditory stimulation led to an early termination of the sleep spindle oscillation around 150–250 msec following stimulus onset. Finally, active ON states and inactive OFF periods during slow waves in NREM sleep affected the auditory response in opposite ways, depending on stimulus intensity. Conclusions: Responses in core auditory cortex are well preserved regardless of sleep spindles recorded in that area, suggesting that thalamocortical sensory relay remains functional during sleep spindles, and that sensory disconnection in sleep is mediated by other mechanisms. Citation: Sela Y, Vyazovskiy VV, Cirelli C, Tononi G, Nir Y. Responses in rat core auditory cortex are preserved during sleep spindle oscillations. SLEEP 2016;39(5):1069–1082. PMID:26856904
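
    Comparisons of this kind hinge on classifying trials by whether a spindle was present around sound onset. A common detection scheme, and only a rough sketch here rather than the authors' pipeline, band-passes the LFP in the sigma band and thresholds its envelope; the band limits, threshold, and duration criterion below are assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def spindle_mask(lfp, fs, band=(10.0, 16.0), thresh_sd=2.0, min_dur_s=0.5):
            """Boolean mask marking putative sleep-spindle samples in an LFP trace."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            sigma = filtfilt(b, a, lfp)                 # sigma-band component
            env = np.abs(hilbert(sigma))                # instantaneous amplitude
            above = env > (env.mean() + thresh_sd * env.std())
            # keep only supra-threshold runs longer than the minimum duration
            mask = np.zeros_like(above)
            start = None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start >= int(min_dur_s * fs):
                        mask[start:i] = True
                    start = None
            if start is not None and len(above) - start >= int(min_dur_s * fs):
                mask[start:] = True
            return mask

    Trials could then be split by whether the mask is true at sound onset, and evoked LFP or MUA responses averaged separately for spindle and no-spindle trials.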

  1. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    PubMed

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors 0270-6474/15/3514910-13$15.00/0.

  2. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    PubMed

    Ptok, M; Meisen, R

    2008-01-01

    The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise from sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills with writing skills in a prospective study of school children attending third and fourth grade. Measures comprised just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJm and TOJb), and grades in writing, language and mathematics; the data were evaluated by correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.

  3. Brain-wide maps of Fos expression during fear learning and recall.

    PubMed

    Cho, Jin-Hyung; Rendall, Sam D; Gray, Jesse M

    2017-04-01

    Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. © 2017 Cho et al.; Published by Cold Spring Harbor Laboratory Press.

  4. Brain-wide maps of Fos expression during fear learning and recall

    PubMed Central

    Cho, Jin-Hyung; Rendall, Sam D.; Gray, Jesse M.

    2017-01-01

    Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. PMID:28331016

  5. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  6. Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.

    PubMed

    Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J

    2015-03-01

    Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. Copyright © 2015 the American Physiological Society.

  7. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  8. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

    A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  9. Auditory motion processing after early blindness

    PubMed Central

    Jiang, Fang; Stecker, G. Christopher; Fine, Ione

    2014-01-01

    Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
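
    The pattern-classification step described above can be approximated with a cross-validated linear classifier applied to voxel patterns from a region of interest. A minimal scikit-learn sketch, assuming a hypothetical (n_trials, n_voxels) matrix of ROI responses (e.g., trial-wise betas) and a vector of perceived directions; the classifier and cross-validation scheme are generic stand-ins, not necessarily those used in the study.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        def decode_direction(patterns, labels, n_folds=5):
            """Cross-validated decoding of perceived motion direction from voxel patterns.

            patterns : (n_trials, n_voxels) array of ROI responses
            labels   : (n_trials,) array of perceived directions (e.g., 0 = left, 1 = right)
            """
            clf = make_pipeline(StandardScaler(), LinearSVC())
            return cross_val_score(clf, patterns, labels, cv=n_folds).mean()

        # Illustrative use with random data standing in for hMT+ patterns
        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 200))
        y = np.repeat([0, 1], 30)
        print(decode_direction(X, y))   # near chance (0.5) for unstructured data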

  10. Enhanced attention-dependent activity in the auditory cortex of older musicians.

    PubMed

    Zendel, Benjamin Rich; Alain, Claude

    2014-01-01

    Musical training improves auditory processing abilities, which correlates with neuro-plastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted subtitled movie of their choice. Both age and musical training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians, but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Temporal lobe networks supporting the comprehension of spoken words.

    PubMed

    Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius

    2017-09-01

    Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural connectome-lesion symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
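
    The directional claim above (high gamma predicting alpha but not vice versa) rests on Granger causality between band-limited power time series. The sketch below extracts the two power envelopes and runs the statsmodels test; the filter design, band edges, and lag order are illustrative assumptions, and the function names are hypothetical.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from statsmodels.tsa.stattools import grangercausalitytests

        def band_power_envelope(x, fs, lo, hi):
            """Instantaneous power envelope of x in the [lo, hi] Hz band."""
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return np.abs(hilbert(filtfilt(b, a, x))) ** 2

        def granger_gamma_to_alpha(ecog, fs, maxlag=20):
            """Test whether high-gamma power Granger-causes alpha power at one site."""
            alpha = band_power_envelope(ecog, fs, 8.0, 12.0)
            gamma = band_power_envelope(ecog, fs, 70.0, 110.0)
            # column order: [effect, cause]; the test asks whether the 2nd column
            # helps predict the 1st beyond its own past
            data = np.column_stack([alpha, gamma])
            return grangercausalitytests(data, maxlag=maxlag)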

  13. Auditory evoked functions in ground crew working in high noise environment of Mumbai airport.

    PubMed

    Thakur, L; Anand, J P; Banerjee, P K

    2004-10-01

    The continuous exposure to the relatively high level of noise in the surroundings of an airport is likely to affect the central pathway of the auditory system as well as the cognitive functions of the people working in that environment. The Brainstem Auditory Evoked Responses (BAER), Mid Latency Response (MLR) and P300 response of the ground crew employees working in Mumbai airport were studied to evaluate the effects of continuous exposure to the high level of noise in the surroundings of the airport on these responses. BAER, P300 and MLR were recorded by using a Nicolet Compact-4 (USA) instrument. Audiometry was also monitored with the help of a GSI-16 audiometer. There was a significant increase in the peak III latency of the BAER in the subjects exposed to noise compared to controls with no change in their P300 values. The exposed group showed hearing loss at different frequencies. The exposure to the high level of noise caused a considerable decline in auditory conduction up to the level of the brainstem with no significant change in conduction in the midbrain, subcortical areas, auditory cortex and associated areas. There was also no significant change in cognitive function as measured by P300 response.

  14. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    PubMed

    Suga, Nobuo

    2018-04-01

    For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency and amplitude modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area is particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy.

    PubMed

    Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S

    2014-03-01

    The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal-hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    PubMed

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and to understand the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.
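
    Envelope coding of the kind measured above is often summarized by how strongly the cortical signal tracks the stimulus envelope at the modulation rate. The sketch below uses spectral coherence as a simple stand-in for the phase-locking measure, assuming a hypothetical source-level MEG time course and the acoustic waveform of the modulated noise; the window length and modulation rate are assumptions.

        import numpy as np
        from scipy.signal import coherence, hilbert

        def envelope_coherence(meg, stimulus, fs, mod_rate_hz, nperseg=None):
            """Coherence between a cortical time course and the stimulus envelope,
            read out at the amplitude-modulation rate.

            meg, stimulus : 1-D arrays of equal length, sampled at fs
            mod_rate_hz   : nominal modulation rate of the background noise
            """
            env = np.abs(hilbert(stimulus))          # acoustic temporal envelope
            if nperseg is None:
                nperseg = int(4 * fs)                # 4-s analysis windows (assumed)
            f, coh = coherence(meg, env, fs=fs, nperseg=nperseg)
            return coh[np.argmin(np.abs(f - mod_rate_hz))]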

  18. The Essential Complexity of Auditory Receptive Fields

    PubMed Central

    Thorson, Ivar L.; Liénard, Jean; David, Stephen V.

    2015-01-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
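
    The factorization constraint described above replaces the full FIR STRF matrix with an outer product of a short spectral filter and a short temporal filter, so prediction reduces to a spectral projection followed by a causal temporal filter. A minimal sketch of the rank-1 prediction step under that assumption; fitting the filters (e.g., by alternating least squares or gradient descent) is omitted, and the variable names are hypothetical.

        import numpy as np

        def predict_rank1_strf(spectrogram, spectral_filter, temporal_filter):
            """Predict a firing-rate time course from a rank-1 (factorized) STRF.

            spectrogram     : (n_freq, n_time) stimulus spectrogram
            spectral_filter : (n_freq,) spectral weighting vector
            temporal_filter : (n_lags,) causal temporal kernel (lag 0 first)

            The rank-1 STRF is the outer product of the two filters, so the
            prediction collapses frequency first, then filters over time.
            """
            drive = spectral_filter @ spectrogram                      # (n_time,)
            rate = np.convolve(drive, temporal_filter, mode="full")[:spectrogram.shape[1]]
            return rate

        # Illustrative use with hand-made filters (fitted values would come from data)
        S = np.random.rand(32, 500)
        w_f = np.exp(-0.5 * ((np.arange(32) - 12) / 3.0) ** 2)   # Gaussian spectral tuning
        w_t = np.exp(-np.arange(15) / 5.0)                       # decaying temporal kernel
        r = predict_rank1_strf(S, w_f, w_t)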

  19. Intracerebral evidence of rhythm transform in the human auditory cortex.

    PubMed

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
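
    A hedged sketch of the frequency-tagging logic used above: the amplitude of the response at the putative beat frequency is compared with neighboring frequency bins. The signal, sampling rate, and beat frequency below are simulated placeholders, not the patients' recordings.

    ```python
    # Minimal frequency-tagging sketch: amplitude at the beat frequency vs neighbors.
    import numpy as np

    fs = 500.0                         # sampling rate (Hz), assumed
    dur = 60.0                         # 60 s of rhythm presentation
    t = np.arange(0, dur, 1 / fs)
    f_beat = 1.25                      # putative beat frequency (Hz), assumed

    # Toy local field potential: beat-locked oscillation buried in noise
    lfp = 0.3 * np.sin(2 * np.pi * f_beat * t) + np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(lfp)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    beat_bin = np.argmin(np.abs(freqs - f_beat))
    # Noise estimate: mean amplitude of surrounding bins (excluding adjacent bins)
    neighbors = np.r_[spectrum[beat_bin - 6:beat_bin - 1], spectrum[beat_bin + 2:beat_bin + 7]]
    snr = spectrum[beat_bin] / neighbors.mean()
    print(f"amplitude at {freqs[beat_bin]:.2f} Hz: {spectrum[beat_bin]:.3f}  (SNR vs neighbors: {snr:.1f})")
    ```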

  20. Neural activity related to discrimination and vocal production of consonant and dissonant musical intervals.

    PubMed

    González-García, Nadia; González, Martha A; Rendón, Pablo L

    2016-07-15

    Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e. audio-vocal integration) to ensure that each note is produced correctly. The objective of this study is to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We utilized 4 musical intervals (specifically, an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and we used the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen so as to test for differences in recognition and production of consonant and dissonant intervals, as well as narrow and wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals as opposed to singing consonant intervals led to an increase in activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals as opposed to singing narrow intervals resulted in the activation of the right anterior insula. Moreover, we observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of primary somatosensory cortex, primary motor cortex, and premotor cortex during singing. When singing dissonant intervals, a higher degree of training correlated with greater activation of the right thalamus and the left putamen. Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms associated with integrating external feedback from auditory and sensorimotor systems than singing consonant intervals, and it would then seem likely that dissonant intervals are intoned by adjusting the neural mechanisms used for the production of consonant intervals. Singing wide intervals requires a greater degree of control than singing narrow intervals, as it engages neural mechanisms that again involve the integration of internal and external feedback. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Multivariate sensitivity to voice during auditory categorization.

    PubMed

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct pattern of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
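
    The sketch below illustrates, on simulated voxel patterns, the kind of cross-validated multivariate classification that underlies such voice-versus-nonvoice sensitivity measures; the trial counts, voxel counts, and effect size are assumptions, and this is not the study's actual analysis code.

    ```python
    # Illustrative sketch: cross-validated "voice" vs "nonvoice" pattern classification.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 80, 200                    # assumed dimensions
    labels = np.repeat([0, 1], n_trials // 2)       # 0 = nonvoice, 1 = voice

    # Voice trials carry a small mean shift in a subset of voxels
    patterns = rng.standard_normal((n_trials, n_voxels))
    patterns[labels == 1, :20] += 0.5

    clf = LinearSVC(C=1.0, max_iter=10000)
    acc = cross_val_score(clf, patterns, labels, cv=5)
    print(f"cross-validated voice/nonvoice accuracy: {acc.mean():.2f}")
    ```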

  2. Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia

    PubMed Central

    Coullon, Gaelle S. L.; Emir, Uzay E.; Fine, Ione; Watkins, Kate E.

    2015-01-01

    Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, 1H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex. PMID:26180125

  3. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    PubMed Central

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  4. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    PubMed

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustive recollecting the sensory properties of "old" words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Reorganization in processing of spectral and temporal input in the rat posterior auditory field induced by environmental enrichment

    PubMed Central

    Jakkamsetti, Vikram; Chang, Kevin Q.

    2012-01-01

    Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with previous enrichment studies in A1, we observe a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization compared with primary fields. PMID:22131375

  6. Tinnitus and hyperacusis involve hyperactivity and enhanced connectivity in auditory-limbic-arousal-cerebellar network

    PubMed Central

    Chen, Yu-Chen; Li, Xiaowei; Liu, Lijie; Wang, Jian; Lu, Chun-Qiang; Yang, Ming; Jiao, Yun; Zang, Feng-Chao; Radziwon, Kelly; Chen, Guang-Di; Sun, Wei; Krishnan Muthaiah, Vijaya Prakash; Salvi, Richard; Teng, Gao-Jun

    2015-01-01

    Hearing loss often triggers an inescapable buzz (tinnitus) and causes everyday sounds to become intolerably loud (hyperacusis), but exactly where and how this occurs in the brain is unknown. To identify the neural substrate for these debilitating disorders, we induced both tinnitus and hyperacusis with an ototoxic drug (salicylate) and used behavioral, electrophysiological, and functional magnetic resonance imaging (fMRI) techniques to identify the tinnitus–hyperacusis network. Salicylate depressed the neural output of the cochlea, but vigorously amplified sound-evoked neural responses in the amygdala, medial geniculate, and auditory cortex. Resting-state fMRI revealed hyperactivity in an auditory network composed of inferior colliculus, medial geniculate, and auditory cortex with side branches to cerebellum, amygdala, and reticular formation. Functional connectivity revealed enhanced coupling within the auditory network and between segments of the auditory network and the cerebellum, reticular formation, amygdala, and hippocampus. A testable model accounting for distress, arousal, and gating of tinnitus and hyperacusis is proposed. DOI: http://dx.doi.org/10.7554/eLife.06576.001 PMID:25962854

  7. How do neurons work together? Lessons from auditory cortex.

    PubMed

    Harris, Kenneth D; Bartho, Peter; Chadderton, Paul; Curto, Carina; de la Rocha, Jaime; Hollender, Liad; Itskov, Vladimir; Luczak, Artur; Marguet, Stephan L; Renart, Alfonso; Sakata, Shuzo

    2011-01-01

    Recordings of single neurons have yielded great insights into the way acoustic stimuli are represented in auditory cortex. However, any one neuron functions as part of a population whose combined activity underlies cortical information processing. Here we review some results obtained by recording simultaneously from auditory cortical populations and individual morphologically identified neurons, in urethane-anesthetized and unanesthetized passively listening rats. Auditory cortical populations produced structured activity patterns both in response to acoustic stimuli and spontaneously, without sensory input. Population spike time patterns were broadly conserved across multiple sensory stimuli and spontaneous events, exhibiting a generally conserved sequential organization lasting approximately 100 ms. Both spontaneous and evoked events exhibited sparse, spatially localized activity in layer 2/3 pyramidal cells, and densely distributed activity in larger layer 5 pyramidal cells and putative interneurons. Laminar propagation differed, however, with spontaneous activity spreading upward from deep layers and slowly across columns, but sensory responses initiating in presumptive thalamorecipient layers, spreading rapidly across columns. In both unanesthetized and urethane-anesthetized rats, global activity fluctuated between a "desynchronized" state characterized by low-amplitude, high-frequency local field potentials and a "synchronized" state of larger, lower-frequency waves. Computational studies suggested that responses could be predicted by a simple dynamical system model fitted to the spontaneous activity immediately preceding stimulus presentation. Fitting this model to the data yielded a nonlinear self-exciting system model in synchronized states and an approximately linear system in desynchronized states. We comment on the significance of these results for auditory cortical processing of acoustic and non-acoustic information. © 2010 Elsevier B.V. All rights reserved.
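
    As a loose illustration of predicting evoked responses from a model fitted to preceding spontaneous activity, the sketch below fits a simple linear autoregressive model to a simulated pre-stimulus trace and evaluates one-step-ahead predictions through an evoked transient. The authors' actual models (including the nonlinear self-exciting variant) are richer than this, and all values here are made up.

    ```python
    # Hedged sketch: linear AR model fitted to spontaneous activity, then used to
    # predict activity during a simulated evoked response.
    import numpy as np

    rng = np.random.default_rng(2)
    order = 5                                    # AR model order, assumed

    # Toy population firing-rate trace: spontaneous period, then a stimulus at t = 800
    x = np.zeros(1000)
    for t in range(1, 1000):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal() * 0.1
    x[800:] += np.exp(-np.arange(200) / 30.0)    # evoked transient

    spont = x[:800]
    # Least-squares AR fit on the spontaneous segment only
    X = np.column_stack([spont[order - k - 1: len(spont) - k - 1] for k in range(order)])
    y = spont[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    # One-step-ahead prediction through the evoked period
    pred = np.array([x[t - order:t][::-1] @ coeffs for t in range(order, 1000)])
    err = np.abs(pred[800 - order:] - x[800:]).mean()
    print(f"mean one-step prediction error during evoked period: {err:.3f}")
    ```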

  8. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Tinnitus distress is linked to enhanced resting-state functional connectivity from the limbic system to the auditory cortex.

    PubMed

    Chen, Yu-Chen; Xia, Wenqing; Chen, Huiyou; Feng, Yuan; Xu, Jin-Jing; Gu, Jian-Ping; Salvi, Richard; Yin, Xindao

    2017-05-01

    The phantom sound of tinnitus is believed to be triggered by aberrant neural activity in the central auditory pathway, but since this debilitating condition is often associated with emotional distress and anxiety, these comorbidities likely arise from maladaptive functional connections to limbic structures such as the amygdala and hippocampus. To test this hypothesis, resting-state functional magnetic resonance imaging (fMRI) was used to identify aberrant effective connectivity of the amygdala and hippocampus in tinnitus patients and to determine the relationship with tinnitus characteristics. Chronic tinnitus patients (n = 26) and age-, sex-, and education-matched healthy controls (n = 23) were included. Both groups were comparable in hearing level. Granger causality analysis utilizing the amygdala and hippocampus as seed regions was used to investigate the directional connectivity and the relationship with tinnitus duration or distress. Relative to healthy controls, tinnitus patients demonstrated abnormal directional connectivity of the amygdala and hippocampus with several regions, including primary and association auditory cortex and other non-auditory areas. Importantly, scores on the Tinnitus Handicap Questionnaires were positively correlated with increased connectivity from the left amygdala to left superior temporal gyrus (r = 0.570, P = 0.005), and from the right amygdala to right superior temporal gyrus (r = 0.487, P = 0.018). Moreover, enhanced effective connectivity from the right hippocampus to left transverse temporal gyrus was correlated with tinnitus duration (r = 0.452, P = 0.030). The results showed that tinnitus distress strongly correlates with enhanced effective connectivity that is directed from the amygdala to the auditory cortex. The longer the phantom sensation persists, the more likely acute tinnitus is to become permanently encoded by memory traces in the hippocampus. Hum Brain Mapp 38:2384-2397, 2017. © 2017 Wiley Periodicals, Inc.
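
    A simplified sketch of the Granger-causality idea used above: one signal is said to Granger-cause another if its past values improve an autoregressive prediction of the other. The time series, model order, and coupling strength below are simulated placeholders, not the fMRI analysis itself.

    ```python
    # Simplified pairwise Granger causality: compare residual variance of an AR model
    # of y with and without the past of x.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 500, 2                                # samples and model order, assumed

    # Toy data: y depends on past x (e.g., a limbic seed driving an auditory region)
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(p, n):
        y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.3 * rng.standard_normal()

    def residual_var(target, predictors, p):
        """Residual variance of an order-p least-squares AR model."""
        rows = []
        for t in range(p, len(target)):
            rows.append(np.concatenate([s[t - p:t] for s in predictors]))
        X = np.asarray(rows)
        beta, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
        return np.var(target[p:] - X @ beta)

    restricted = residual_var(y, [y], p)         # Y's own past only
    full = residual_var(y, [y, x], p)            # Y's past plus X's past
    gc_x_to_y = np.log(restricted / full)        # > 0 suggests an X -> Y influence
    print(f"Granger causality x->y: {gc_x_to_y:.3f}")
    ```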

  10. Temporal tuning in the bat auditory cortex is sharper when studied with natural echolocation sequences.

    PubMed

    Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred

    2016-06-30

    Precise temporal coding is necessary for proper acoustic analysis. However, at cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bats' cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats' behavior. We show that cortical neurons tuned to target-distance actually profit from forward suppression induced by natural echolocation sequences. These neurons can more precisely extract target distance information when they are stimulated with natural echolocation sequences than during stimulation with isolated call-echo pairs. We conclude that forward suppression does for time domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. When talking about cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element.

  11. Neuron discharges in the rat auditory cortex during electrical intracortical stimulation.

    PubMed

    Maldonado, P E; Altman, J A; Gerstein, G L

    1998-01-01

    Studies were carried out in rats anesthetized with ketamine or Nembutal, with recording of multicellular activity (with separate identification of responses from individual neurons) in the primary auditory cortex before and after electrical intracortical microstimulation. These experiments showed that about half of the set of neurons studied produced responses to short tonal bursts, these responses having two components: initial discharges arising in response to the sound, and afterdischarges occurring after pauses of 50-100 msec. Afterdischarges lasted at least several seconds, and were generally characterized by a rhythmic structure (with a frequency of 8-12 Hz). After electrical microstimulation, the level of spike activity increased, especially in afterdischarges, and this increase could last up to 4 h. Combined peristimulus histograms, cross-correlations, and gravitational analyses were used to demonstrate interactions of neurons, which increased after electrical stimulation and were especially pronounced in the response afterdischarges.

  12. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    PubMed

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex in both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had re-appeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  13. Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.

    PubMed

    Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne

    2015-09-21

    Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Articulatory movements modulate auditory responses to speech

    PubMed Central

    Agnew, Z.K.; McGettigan, C.; Banks, B.; Scott, S.K.

    2013-01-01

    Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior–posterior division of superior temporal regions where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest that posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. PMID:22982103

  15. A Layer-specific Corticofugal Input to the Mouse Superior Colliculus.

    PubMed

    Zurita, Hector; Rock, Crystal; Perkins, Jessica; Apicella, Alfonso Junior

    2017-07-05

    Corticofugal projections from the auditory cortex (AC) reach each level of the auditory system and are considered to provide feedback "loops" important for modulating the flow of ascending information. It is well established that the cortex can influence the response of neurons in the superior colliculus (SC) via descending corticofugal projections. However, little is known about the relative contribution of different pyramidal neurons to these projections in the SC. We addressed this question by taking advantage of anterograde and retrograde neuronal tracing to directly examine the laminar distribution, long-range projections, and electrophysiological properties of pyramidal neurons projecting from the AC to the SC of the mouse brain. Here we show that layer 5 cortico-superior-collicular pyramidal neurons act as bandpass filters, resonating with a broad peak at ∼3 Hz, whereas layer 6 neurons act as low-pass filters. The dissimilar subthreshold properties of layer 5 and layer 6 cortico-superior-collicular pyramidal neurons can be described by differences in the hyperpolarization-activated cyclic nucleotide-gated cation h-current (Ih). Ih also reduced the summation of short trains of artificial excitatory postsynaptic potentials injected at the soma of layer 5, but not layer 6, cortico-superior-collicular pyramidal neurons, indicating a differential dampening effect of Ih on these neurons. © The Author 2017. Published by Oxford University Press. All rights reserved.

  16. The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.

    PubMed

    Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina

    2018-05-23

    Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.
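
    As a minimal illustration of trial-by-trial contingency updating (not the authors' hierarchical Bayesian model), the sketch below performs Beta-Bernoulli updating of a cue-target contingency; the resulting expectation is the sort of "prediction" that could bias decisions on ambiguous trials. The contingency value and trial count are invented.

    ```python
    # Hedged sketch: Bayesian (Beta-Bernoulli) updating of a cue-target contingency.
    import numpy as np

    rng = np.random.default_rng(4)
    true_contingency = 0.8            # P(target A | cue); changes over time would make it volatile
    n_trials = 60

    alpha, beta = 1.0, 1.0            # uniform Beta prior over the contingency
    predictions = []
    for trial in range(n_trials):
        predictions.append(alpha / (alpha + beta))      # expected P(target A | cue) before this trial
        outcome = rng.random() < true_contingency       # observed target on this trial
        alpha += outcome                                 # posterior update of the Beta parameters
        beta += 1 - outcome

    print("prediction after 5 trials: ", round(predictions[5], 2))
    print("prediction after 60 trials:", round(alpha / (alpha + beta), 2))
    ```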

  17. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.

  18. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis.

    PubMed

    Evans, Samuel; Davis, Matthew H

    2015-12-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. © The Author 2015. Published by Oxford University Press.
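
    A hedged sketch of the representational similarity analysis (RSA) logic on simulated data: a neural representational dissimilarity matrix (RDM) is built from condition patterns and correlated with a model RDM coding syllable identity. Condition structure, voxel counts, and noise levels are assumptions, not the study's searchlight implementation.

    ```python
    # Illustrative RSA sketch: compare a neural RDM with a syllable-identity model RDM.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    n_conditions, n_voxels = 8, 100          # e.g., 4 syllables x 2 speakers (assumed)
    syllable = np.repeat(np.arange(4), 2)    # syllable identity of each condition

    # Toy patterns: shared syllable signal plus speaker/noise variability
    base = rng.standard_normal((4, n_voxels))
    patterns = base[syllable] + 0.7 * rng.standard_normal((n_conditions, n_voxels))

    neural_rdm = pdist(patterns, metric="correlation")          # condition-by-condition dissimilarities
    model_rdm = pdist(syllable[:, None], metric="hamming")      # 0 = same syllable, 1 = different

    rho, _ = spearmanr(neural_rdm, model_rdm)
    print(f"neural-model RDM correlation (syllable identity): {rho:.2f}")
    ```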

  19. Shaping the aging brain: role of auditory input patterns in the emergence of auditory cortical impairments

    PubMed Central

    Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne

    2013-01-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649

  20. The Effect of Visual and Auditory Enhancements on Excitability of the Primary Motor Cortex during Motor Imagery: A Pilot Study

    ERIC Educational Resources Information Center

    Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya

    2012-01-01

    The effect of visual and auditory enhancements of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digit minimi muscle during MI with auditory, visual and, auditory and visual information, and no…

  1. PTEN regulation of local and long-range connections in mouse auditory cortex.

    PubMed

    Xiong, Qiaojie; Oviedo, Hysell V; Trotman, Lloyd C; Zador, Anthony M

    2012-02-01

    Autism spectrum disorders (ASDs) are highly heritable developmental disorders caused by a heterogeneous collection of genetic lesions. Here we use a mouse model to study the effect on cortical connectivity of disrupting the ASD candidate gene PTEN (phosphatase and tensin homolog deleted on chromosome 10). Through Cre-mediated recombination, we conditionally knocked out PTEN expression in a subset of auditory cortical neurons. Analysis of long-range connectivity using channelrhodopsin-2 revealed that the strength of synaptic inputs from both the contralateral auditory cortex and from the thalamus onto PTEN-cko neurons was enhanced compared with nearby neurons with normal PTEN expression. Laser-scanning photostimulation showed that local inputs onto PTEN-cko neurons in the auditory cortex were similarly enhanced. The hyperconnectivity caused by PTEN-cko could be blocked by rapamycin, a specific inhibitor of the PTEN downstream molecule mammalian target of rapamycin complex 1. Together, our results suggest that local and long-range hyperconnectivity may constitute a physiological basis for the effects of mutations in PTEN and possibly other ASD candidate genes.

  2. Short-Term Memory for Space and Time Flexibly Recruit Complementary Sensory-Biased Frontal Lobe Attention Networks.

    PubMed

    Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C

    2015-08-19

    The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Echo-level compensation and delay tuning in the auditory cortex of the mustached bat.

    PubMed

    Macías, Silvio; Mora, Emanuel C; Hechavarría, Julio C; Kössl, Manfred

    2016-06-01

    During echolocation, bats continuously perform audio-motor adjustments to optimize detection efficiency. It has been demonstrated that bats adjust the amplitude of their biosonar vocalizations (known as 'pulses') to stabilize the amplitude of the returning echo. Here, we investigated this echo-level compensation behaviour by swinging mustached bats on a pendulum towards a reflective surface. In such a situation, the bats lower the amplitude of their emitted pulses to maintain the amplitude of incoming echoes at a constant level as they approach a target. We report that cortical auditory neurons that encode target distance have receptive fields that are optimized for dealing with echo-level compensation. In most cortical delay-tuned neurons, the echo amplitude eliciting the maximum response matches the echo amplitudes measured from the bats' biosonar vocalizations while they are swung in a pendulum. In addition, neurons tuned to short target distances are maximally responsive to low pulse amplitudes while neurons tuned to long target distances respond maximally to high pulse amplitudes. Our results suggest that bats dynamically adjust biosonar pulse amplitude to match the encoding of target range and to keep the amplitude of the returning echo within the bounds of the cortical map of echo delays. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans

    PubMed Central

    Poliva, Oren

    2017-01-01

    In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls. PMID:28928931

  5. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.

  6. Different Types of Laughter Modulate Connectivity within Distinct Parts of the Laughter Perception Network

    PubMed Central

    Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin

    2013-01-01

    Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. Here, the right dorsolateral precentral cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter. PMID:23667619

  7. Different types of laughter modulate connectivity within distinct parts of the laughter perception network.

    PubMed

    Wildgruber, Dirk; Szameitat, Diana P; Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin

    2013-01-01

    Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. Here, the right dorsolateral precentral cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter.

  8. Representation of Sound Categories in Auditory Cortical Maps

    ERIC Educational Resources Information Center

    Guenther, Frank H.; Nieto-Castanon, Alfonso; Ghosh, Satrajit S.; Tourville, Jason A.

    2004-01-01

    Functional magnetic resonance imaging (fMRI) was used to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical (good) and nonprototypical (bad) examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation…

  9. Auditory cortex of newborn bats is prewired for echolocation.

    PubMed

    Kössl, Manfred; Voss, Cornelia; Mora, Emanuel C; Macias, Silvio; Foeller, Elisabeth; Vater, Marianne

    2012-04-10

    Neuronal computation of object distance from echo delay is an essential task that echolocating bats must master for spatial orientation and the capture of prey. In the dorsal auditory cortex of bats, neurons specifically respond to combinations of short frequency-modulated components of emitted call and delayed echo. These delay-tuned neurons are thought to serve in target range calculation. It is unknown whether neuronal correlates of active space perception are established by experience-dependent plasticity or by innate mechanisms. Here we demonstrate that in the first postnatal week, before onset of echolocation and flight, dorsal auditory cortex already contains functional circuits that calculate distance from the temporal separation of a simulated pulse and echo. This innate cortical implementation of a purely computational processing mechanism for sonar ranging should enhance survival of juvenile bats when they first engage in active echolocation behaviour and flight.

  10. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    PubMed Central

    Sörös, Peter; Michael, Nikolaus; Tollkötter, Melanie; Pfleiderer, Bettina

    2006-01-01

    Background: A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results: Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion: These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex. PMID:16884545

  11. Auditory Responses and Stimulus-Specific Adaptation in Rat Auditory Cortex are Preserved Across NREM and REM Sleep

    PubMed Central

    Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio

    2015-01-01

    Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, Nonrapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
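
    For reference, a common way to express the strength of stimulus-specific adaptation is the SSA index, (d - s) / (d + s), computed from responses to the same tone when it is a rare deviant (d) versus a common standard (s); the numbers below are made up to show the arithmetic.

    ```python
    # Quick sketch of the standard SSA index computation (illustrative values).
    deviant_rate = 12.0    # spikes/s when the tone is the rare "oddball" (assumed)
    standard_rate = 8.5    # spikes/s when the same tone is the repeated standard

    ssa_index = (deviant_rate - standard_rate) / (deviant_rate + standard_rate)
    print(f"SSA index: {ssa_index:.2f}")   # ~0.17, on the order of the effects reported above
    ```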

  12. Frontal auditory evoked potentials and augmenting-reducing.

    PubMed

    Bruneau, N; Roux, S; Garreau, B; Lelord, G

    1985-09-01

    Auditory evoked potentials (AEPs) to tones (750 Hz, 200 msec) ranging from 50 to 80 dB SPL were studied at Cz and Fz leads in 29 normal adults (15 males) aged 20 to 22 years. Peak-to-trough amplitudes were measured for the P1-N1 and the N1-P2 waveforms, as well as baseline (500 msec prestimulus)-to-peak amplitudes for each component, i.e., P1, N1 and P2. Amplitudes were examined as a function of intensity and electrode location. Cz-Fz amplitude differences increased with increasing stimulus intensity, the differentiating peak being the N1 component. An overall reducing phenomenon was found at Fz in the 70-80 dB range, whereas an augmenting effect was observed at Cz for these intensities. The augmenting/reducing groups defined by analysis of individual amplitude-intensity patterns differed depending on whether Fz or Cz results were considered: Fz reducers were more numerous than Cz reducers. These results showing prominent reducing at the frontal level were examined in relation to data concerning the modulatory function of the frontal cortex on auditory EPs. Implications were drawn for the role of the frontal cortex in cortical augmenting-reducing.

  13. Auditory inhibitory gating in medial prefrontal cortex: Single unit and local field potential analysis.

    PubMed

    Mears, R P; Klein, A C; Cromwell, H C

    2006-08-11

    Medial prefrontal cortex is a crucial region involved in inhibitory processes. Damage to the medial prefrontal cortex can lead to loss of normal inhibitory control over motor, sensory, emotional and cognitive functions. The goal of the present study was to examine the basic properties of inhibitory gating in this brain region in rats. Inhibitory gating has recently been proposed as a neurophysiological assay for sensory filters in higher brain regions that potentially enable or disable information throughput. This perspective has important clinical relevance due to the findings that gating is dramatically impaired in individuals with emotional and cognitive impairments (e.g., schizophrenia). We used the standard inhibitory gating two-tone paradigm with a 500 ms interval between tones and a 10 s interval between tone pairs. We recorded both single-unit activity and local field potentials from chronic microwire arrays implanted in the medial prefrontal cortex. We investigated short-term (within-session) and long-term (between-session) variability of auditory gating and additionally examined how altering the interval between the tones influenced the potency of the inhibition. The local field potentials displayed greater variability, with a reduction in the amplitudes of the tone responses over both the short- and long-term time windows. The decrease across sessions was most pronounced for the second tone response (test tone), leading to more robust gating (a lower T/C ratio). Surprisingly, single-unit responses of different varieties retained similar levels of auditory responsiveness and inhibition in both the short- and long-term analyses. Neural inhibition decreased monotonically as the intertone interval increased. This change in gating was most consistent in the local field potentials. Subsets of single-unit responses, however, retained inhibition even at the longest intertone interval tested (4 s). These findings support the idea that the medial prefrontal cortex is an important site where early inhibitory functions reside and potentially mediate psychological processes.
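
    The gating measure referred to above, the T/C ratio, is simply the amplitude of the response to the second (test) tone divided by the amplitude of the response to the first (conditioning) tone; values well below 1 indicate strong inhibition. A minimal illustrative sketch; the amplitude values and function name are invented for the example, not taken from the study:

```python
import numpy as np

def tc_ratio(conditioning_amplitudes, test_amplitudes):
    """Inhibitory gating ratio: mean test-tone response amplitude divided
    by mean conditioning-tone response amplitude. Ratios well below 1
    indicate robust gating (strong inhibition of the second response)."""
    return np.mean(test_amplitudes) / np.mean(conditioning_amplitudes)

# Illustrative peak-to-peak evoked amplitudes (arbitrary units)
conditioning = np.array([52.0, 48.5, 55.1, 50.3])
test = np.array([18.2, 21.4, 16.9, 19.8])
print(f"T/C ratio: {tc_ratio(conditioning, test):.2f}")
```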

  14. Biased and unbiased perceptual decision-making on vocal emotions.

    PubMed

    Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha

    2017-11-24

    Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region-of-interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data show that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.

  15. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.

    PubMed

    Dykstra, Andrew R; Halgren, Eric; Gutschalk, Alexander; Eskandar, Emad N; Cash, Sydney S

    2016-01-01

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  16. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations

    PubMed Central

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618
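
    The opponent-channel readout described above can be illustrated with a toy model: two broadly tuned channels prefer opposite hemifields, and azimuth is read out from the difference of their responses, which varies monotonically with azimuth. The sketch below is a hedged illustration only; the sigmoidal tuning curves, noise level, and function names are assumptions made for the example and are not the authors' analysis:

```python
import numpy as np

def channel_response(azimuth_deg, preferred_side):
    """Broad sigmoidal tuning over azimuth (degrees); preferred_side is
    +1 for a right-hemifield-preferring channel, -1 for left-preferring.
    Parameters are illustrative, not fitted to data."""
    slope = 0.05
    return 1.0 / (1.0 + np.exp(-preferred_side * slope * azimuth_deg))

def opponent_decode(resp_left_pref, resp_right_pref, azimuths, lut_left, lut_right):
    """Decode azimuth by matching the measured opponent signal
    (right-preferring minus left-preferring channel) to a lookup table."""
    measured = resp_right_pref - resp_left_pref
    expected = lut_right - lut_left
    return azimuths[np.argmin(np.abs(expected - measured))]

# Lookup table of expected opponent signals across azimuth
azimuths = np.linspace(-90, 90, 181)
lut_left = channel_response(azimuths, preferred_side=-1)
lut_right = channel_response(azimuths, preferred_side=+1)

# Decode a single noisy observation at an assumed true azimuth of +30 degrees
true_azimuth = 30.0
rng = np.random.default_rng(0)
resp_left = channel_response(true_azimuth, -1) + rng.normal(0, 0.02)
resp_right = channel_response(true_azimuth, +1) + rng.normal(0, 0.02)
print("Decoded azimuth (deg):",
      opponent_decode(resp_left, resp_right, azimuths, lut_left, lut_right))
```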

  17. There's more than one way to scan a cat: imaging cat auditory cortex with high-field fMRI using continuous or sparse sampling.

    PubMed

    Hall, Amee J; Brown, Trecia A; Grahn, Jessica A; Gati, Joseph S; Nixon, Pam L; Hughes, Sarah M; Menon, Ravi S; Lomber, Stephen G

    2014-03-15

    When conducting auditory investigations using functional magnetic resonance imaging (fMRI), there are inherent potential confounds that need to be considered. Traditional continuous fMRI acquisition methods produce sounds >90 dB, which compete with stimuli or produce neural activation that masks evoked activity. Sparse scanning methods insert a period of reduced MRI-related noise between image acquisitions, in which a stimulus can be presented without competition. In this study, we compared sparse and continuous scanning methods to identify the optimal approach to investigate acoustically evoked cortical, thalamic and midbrain activity in the cat. Using a 7 T magnet, we presented broadband noise, 10 kHz tones, or 0.5 kHz tones in a block design, interleaved with blocks in which no stimulus was presented. Continuous scanning resulted in larger clusters of activation and more peak voxels within the auditory cortex. However, no significant activation was observed within the thalamus. Also, no significant difference between continuous and sparse scanning was found in activations of midbrain structures. Higher-magnitude activations were identified in auditory cortex compared to the midbrain using both continuous and sparse scanning. These results indicate that continuous scanning is the preferred method for investigations of auditory cortex in the cat using fMRI. In addition, the choice of method for future investigations of midbrain activity should be driven by other experimental factors, such as stimulus intensity and task performance during scanning. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia.

    PubMed

    Coullon, Gaelle S L; Emir, Uzay E; Fine, Ione; Watkins, Kate E; Bridge, Holly

    2015-09-01

    Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, (1)H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex. Copyright © 2015 the American Physiological Society.

  19. Changes in resting-state connectivity in musicians with embouchure dystonia.

    PubMed

    Haslinger, Bernhard; Noé, Jonas; Altenmüller, Eckart; Riedl, Valentin; Zimmer, Claus; Mantel, Tobias; Dresel, Christian

    2017-03-01

    Embouchure dystonia is a highly disabling task-specific dystonia in professional brass musicians, leading to spasms of perioral muscles while playing the instrument. As patients are asymptomatic at rest, resting-state functional magnetic resonance imaging in these patients can reveal changes in functional connectivity within and between brain networks independent of dystonic symptoms. We therefore compared embouchure dystonia patients to healthy musicians with resting-state functional magnetic resonance imaging in combination with independent component analyses. Patients showed increased functional connectivity of the bilateral sensorimotor mouth area and right secondary somatosensory cortex, but reduced functional connectivity of the bilateral sensorimotor hand representation, left inferior parietal cortex, and mesial premotor cortex within the lateral motor function network. Within the auditory function network, the functional connectivity of bilateral secondary auditory cortices, right posterior parietal cortex and left sensorimotor hand area was increased, whereas the functional connectivity of right primary auditory cortex, right secondary somatosensory cortex, right sensorimotor mouth representation, bilateral thalamus, and anterior cingulate cortex was reduced. Negative functional connectivity between the cerebellar and lateral motor function network and positive functional connectivity between the cerebellar and primary visual network were reduced. Abnormal resting-state functional connectivity of sensorimotor representations of affected and unaffected body parts suggests a pathophysiological predisposition for abnormal sensorimotor and audiomotor integration in embouchure dystonia. Altered connectivity to the cerebellar network highlights the important role of the cerebellum in this disease. © 2016 International Parkinson and Movement Disorder Society.

  20. Encoding frequency contrast in primate auditory cortex

    PubMed Central

    Scott, Brian H.; Semple, Malcolm N.

    2014-01-01

    Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525
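
    The upper limit of phase locking compared across SAM and SFM above is conventionally quantified with vector strength, the resultant length of spike phases relative to the modulation cycle. A minimal illustrative sketch; the synthetic spike times and the 10 Hz modulation rate are assumptions for the example and are not the authors' data or methods:

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike times relative to a modulation cycle:
    1 = perfect phase locking, 0 = no locking."""
    phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

# Synthetic spike train loosely locked to a 10 Hz modulation
rng = np.random.default_rng(1)
cycle_times = np.arange(100) / 10.0                    # one spike per cycle
spike_times = cycle_times + rng.normal(0, 0.01, 100)   # 10 ms timing jitter
print(f"Vector strength at 10 Hz: {vector_strength(spike_times, 10.0):.2f}")
```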

  1. The laminar and temporal structure of stimulus information in the phase of field potentials of auditory cortex.

    PubMed

    Szymanski, Francois D; Rabinowitz, Neil C; Magri, Cesare; Panzeri, Stefano; Schnupp, Jan W H

    2011-11-02

    Recent studies have shown that the phase of low-frequency local field potentials (LFPs) in sensory cortices carries a significant amount of information about complex naturalistic stimuli, yet the laminar circuit mechanisms and the aspects of stimulus dynamics responsible for generating this phase information remain essentially unknown. Here we investigated these issues by means of an information theoretic analysis of LFPs and current source densities (CSDs) recorded with laminar multi-electrode arrays in the primary auditory area of anesthetized rats during complex acoustic stimulation (music and broadband 1/f stimuli). We found that most LFP phase information originated from discrete "CSD events" consisting of granular-superficial layer dipoles of short duration and large amplitude, which we hypothesize to be triggered by transient thalamocortical activation. These CSD events occurred at rates of 2-4 Hz during both stimulation with complex sounds and silence. During stimulation with complex sounds, these events reliably reset the LFP phases at specific times during the stimulation history. These facts suggest that the informativeness of LFP phase in rat auditory cortex is the result of transient, large-amplitude events, of the "evoked" or "driving" type, reflecting strong depolarization in thalamo-recipient layers of cortex. Finally, the CSD events were characterized by a small number of discrete types of infragranular activation. The extent to which infragranular regions were activated was stimulus dependent. These patterns of infragranular activations may reflect a categorical evaluation of stimulus episodes by the local circuit to determine whether to pass on stimulus information through the output layers.

  2. Auditory Temporal Acuity Probed With Cochlear Implant Stimulation and Cortical Recording

    PubMed Central

    Kirby, Alana E.

    2010-01-01

    Cochlear implants stimulate the auditory nerve with amplitude-modulated (AM) electric pulse trains. Pulse rates >2,000 pulses per second (pps) have been hypothesized to enhance transmission of temporal information. Recent studies, however, have shown that higher pulse rates impair phase locking to sinusoidal AM in the auditory cortex and impair perceptual modulation detection. Here, we investigated the effects of high pulse rates on the temporal acuity of transmission of pulse trains to the auditory cortex. In anesthetized guinea pigs, signal-detection analysis was used to measure the thresholds for detection of gaps in pulse trains at rates of 254, 1,017, and 4,069 pps and in acoustic noise. Gap-detection thresholds decreased by an order of magnitude with increases in pulse rate from 254 to 4,069 pps. Such a pulse-rate dependence would likely influence speech reception through clinical speech processors. To elucidate the neural mechanisms of gap detection, we measured recovery from forward masking after a 196.6-ms pulse train. Recovery from masking was faster at higher carrier pulse rates and masking increased linearly with current level. We fit the data with a dual-exponential recovery function, consistent with a peripheral and a more central process. High-rate pulse trains evoked less central masking, possibly due to adaptation of the response in the auditory nerve. Neither gap detection nor forward masking varied with cortical depth, indicating that these processes are likely subcortical. These results indicate that gap detection and modulation detection are mediated by two separate neural mechanisms. PMID:19923242
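
    The dual-exponential recovery function mentioned above, a fast component attributed to a peripheral process and a slower component attributed to a more central one, can be fit to forward-masking recovery data with ordinary nonlinear least squares. The sketch below is illustrative only; the synthetic data, time constants, and starting values are assumptions rather than values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_exponential(t, a_fast, tau_fast, a_slow, tau_slow):
    """Masking that decays as the sum of a fast and a slow exponential."""
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

# Synthetic recovery-from-masking data (time in ms after masker offset)
t = np.linspace(1, 300, 60)
rng = np.random.default_rng(2)
masking = dual_exponential(t, 0.6, 10.0, 0.3, 120.0) + rng.normal(0, 0.02, t.size)

p0 = [0.5, 5.0, 0.2, 100.0]  # starting guesses for the two components
popt, _ = curve_fit(dual_exponential, t, masking, p0=p0, maxfev=10000)
print("Fitted [a_fast, tau_fast, a_slow, tau_slow]:", np.round(popt, 2))
```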

  3. Cooperative dynamics in auditory brain response

    NASA Astrophysics Data System (ADS)

    Kwapień, J.; Drożdż, S.; Liu, L. C.; Ioannides, A. A.

    1998-11-01

    Simultaneous estimates of activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulations were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analyzed using the concept of mutual information. The analysis constitutes an objective method to address the nature of interhemispheric correlations in response to auditory stimulations. The results provide clear evidence of the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the interhemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
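
    The analysis described above rests on estimating mutual information between the left and right auditory-cortex activity time series and scanning a relative lag between them to see which hemisphere leads. The sketch below is a simplified illustration using a plug-in histogram estimator; the binning, the synthetic test signals, and the lag convention are assumptions for the example, not the authors' procedure:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of mutual information in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])))

def mi_vs_lag(left, right, max_lag):
    """MI as a function of lag; positive lag pairs left[t] with right[t - lag],
    so a peak at positive lag means the right-hemisphere signal leads."""
    results = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            results[lag] = mutual_information(left[lag:], right[:-lag])
        elif lag < 0:
            results[lag] = mutual_information(left[:lag], right[-lag:])
        else:
            results[lag] = mutual_information(left, right)
    return results

# Synthetic example: the "right" signal leads the "left" signal by 5 samples
rng = np.random.default_rng(3)
right = rng.normal(size=2000)
left = np.roll(right, 5) + 0.5 * rng.normal(size=2000)
mi_by_lag = mi_vs_lag(left, right, max_lag=10)
print("Lag with peak MI:", max(mi_by_lag, key=mi_by_lag.get))
```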

  4. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    PubMed Central

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  5. Dual Gamma Rhythm Generators Control Interlaminar Synchrony in Auditory Cortex

    PubMed Central

    Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O.; Roopun, Anita K.; Traub, Roger D.; Kopell, Nancy J.; Whittington, Miles A.

    2013-01-01

    Rhythmic activity in populations of cortical neurons accompanies, and may underlie, many aspects of primary sensory processing and short-term memory. Activity in the gamma band (30 Hz up to > 100 Hz) is associated with such cognitive tasks and is thought to provide a substrate for temporal coupling of spatially separate regions of the brain. However, such coupling requires close matching of frequencies in co-active areas, and because the nominal gamma band is so spectrally broad, it may not constitute a single underlying process. Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex. A persistent, 30 – 45 Hz, gap-junction-dependent gamma rhythm dominates rhythmic activity in supragranular layers 2/3, whereas a tonic depolarization-dependent, 50 – 80 Hz, pyramidal/interneuron gamma rhythm is expressed in granular layer 4 with strong glutamatergic excitation. As a consequence, altering the degree of excitation of the auditory cortex causes bifurcation in the gamma frequency spectrum and can effectively switch temporal control of layer 5 from supragranular to granular layers. Computational modeling predicts the pattern of interlaminar connections may help to stabilize this bifurcation. The data suggest that different strategies are used by primary auditory cortex to represent weak and strong inputs, with principal cell firing rate becoming increasingly important as excitation strength increases. PMID:22114273

  6. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.

  7. Effects of musical training on the auditory cortex in children.

    PubMed

    Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E

    2003-11-01

    Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.

  8. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  9. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  10. Functional MRI of the vocalization-processing network in the macaque brain

    PubMed Central

    Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.

    2015-01-01

    Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546

  11. Binaural fusion and the representation of virtual pitch in the human auditory cortex.

    PubMed

    Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E

    1996-10-01

    The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.

  12. Primary auditory cortex regulates threat memory specificity.

    PubMed

    Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Abnormal auditory synchronization in stuttering: A magnetoencephalographic study.

    PubMed

    Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo

    2017-02-01

    In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS. Published by Elsevier B.V.
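
    The phase-locking value (PLV) used above for inter-hemispheric synchronization measures how consistently the phase difference between two band-limited signals is maintained; the phase-locking factor is the analogous quantity computed across trials for a single site. The sketch below computes a PLV over time from the analytic signal; the 20 Hz test signals and noise level are assumptions for the example (in practice the signals would first be band-pass filtered and the PLV would typically be computed across trials):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(sig_a, sig_b):
    """PLV between two band-limited signals: 1 = perfectly constant phase
    difference over time, 0 = no consistent phase relationship."""
    phase_a = np.angle(hilbert(sig_a))
    phase_b = np.angle(hilbert(sig_b))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Illustrative 20 Hz (beta-band) signals with a roughly fixed phase offset
fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(4)
sig_a = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.normal(size=t.size)
sig_b = np.sin(2 * np.pi * 20 * t + np.pi / 4) + 0.3 * rng.normal(size=t.size)
print(f"PLV: {phase_locking_value(sig_a, sig_b):.2f}")
```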

  14. Separating neural and vascular effects of caffeine using simultaneous EEG–FMRI: Differential effects of caffeine on cognitive and sensorimotor brain responses

    PubMed Central

    Diukova, Ana; Ware, Jennifer; Smith, Jessica E.; Evans, C. John; Murphy, Kevin; Rogers, Peter J.; Wise, Richard G.

    2012-01-01

    The effects of caffeine are mediated through its non-selective antagonistic effects on adenosine A1 and A2A adenosine receptors, resulting in increased neuronal activity but also vasoconstriction in the brain. Caffeine, therefore, can modify BOLD FMRI signal responses through both its neural and its vascular effects depending on receptor distributions in different brain regions. In this study we aim to distinguish neural and vascular influences of a single dose of caffeine in measurements of task-related brain activity using simultaneous EEG–FMRI. We chose to compare low-level visual and motor (paced finger tapping) tasks with a cognitive (auditory oddball) task, with the expectation that caffeine would differentially affect brain responses in relation to these tasks. To avoid the influence of chronic caffeine intake, we examined the effect of 250 mg of oral caffeine on 14 non- and infrequent caffeine consumers in a double-blind placebo-controlled cross-over study. Our results show that the task-related BOLD signal change in visual and primary motor cortex was significantly reduced by caffeine, while the amplitude and latency of visual evoked potentials over occipital cortex remained unaltered. However, during the auditory oddball task (target versus non-target stimuli) caffeine significantly increased the BOLD signal in frontal cortex. Correspondingly, there was also a significant effect of caffeine in reducing the target evoked response potential (P300) latency in the oddball task, and this was associated with a positive potential over frontal cortex. Behavioural data showed that caffeine also improved performance in the oddball task with a significantly reduced number of missed responses. Our results are consistent with earlier studies demonstrating altered flow-metabolism coupling after caffeine administration in the context of our observation of a generalised caffeine-induced reduction in cerebral blood flow demonstrated by arterial spin labelling (19% reduction over grey matter). We were able to identify vascular effects and hence altered neurovascular coupling through the alteration of low-level task FMRI responses in the face of a preserved visual evoked potential. However, our data also suggest a cognitive effect of caffeine through its positive effect on the frontal BOLD signal consistent with the shortening of oddball EEG response latency. The combined use of EEG–FMRI is a promising methodology for investigating alterations in brain function in drug and disease studies where neurovascular coupling may be altered on a regional basis. PMID:22561357

  15. Elevated correlations in neuronal ensembles of mouse auditory cortex following parturition.

    PubMed

    Rothschild, Gideon; Cohen, Lior; Mizrahi, Adi; Nelken, Israel

    2013-07-31

    The auditory cortex is malleable by experience. Previous studies of auditory plasticity have described experience-dependent changes in response profiles of single neurons or changes in global tonotopic organization. However, experience-dependent changes in the dynamics of local neural populations have remained unexplored. In this study, we examined the influence of a dramatic yet natural experience in the life of female mice, giving birth and becoming a mother, on single neurons and neuronal ensembles in the primary auditory cortex (A1). Using in vivo two-photon calcium imaging and electrophysiological recordings from layer 2/3 in A1 of mothers and age-matched virgin mice, we monitored changes in the responses to a set of artificial and natural sounds. Population dynamics underwent large changes as measured by pairwise and higher-order correlations, with noise correlations increasing as much as twofold in lactating mothers. Concomitantly, changes in response properties of single neurons were modest and selective. Remarkably, despite the large changes in correlations, information about stimulus identity remained essentially the same in the two groups. Our results demonstrate changes in the correlation structure of neuronal activity as a result of a natural life event.
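
    The pairwise noise correlations reported above are typically computed as Pearson correlations between the trial-to-trial fluctuations of pairs of neurons, after subtracting each neuron's mean response to the repeated stimulus. A minimal illustrative sketch under that assumption; the simulated spike counts and the shared-gain model are invented for the example and are not the authors' data:

```python
import numpy as np

def mean_pairwise_noise_correlation(counts):
    """counts: array of shape (n_trials, n_neurons) of responses to repeated
    presentations of the same stimulus. Returns the mean Pearson correlation
    of trial-to-trial fluctuations across all unique neuron pairs."""
    residuals = counts - counts.mean(axis=0, keepdims=True)  # remove stimulus-driven mean
    corr = np.corrcoef(residuals, rowvar=False)              # neurons x neurons matrix
    upper = corr[np.triu_indices_from(corr, k=1)]            # unique pairs only
    return upper.mean()

# Illustrative data: 50 trials, 20 neurons, with a shared trial-to-trial gain
rng = np.random.default_rng(5)
shared = rng.normal(0, 1, size=(50, 1))
counts = 10 + shared + rng.normal(0, 1, size=(50, 20))
print(f"Mean pairwise noise correlation: {mean_pairwise_noise_correlation(counts):.2f}")
```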

  16. Familiarity with a Vocal Category Biases the Compartmental Expression of "Arc/Arg3.1" in Core Auditory Cortex

    ERIC Educational Resources Information Center

    Ivanova, Tamara N.; Gross, Christina; Mappus, Rudolph C.; Kwon, Yong Jun; Bassell, Gary J.; Liu, Robert C.

    2017-01-01

    Learning to recognize a stimulus category requires experience with its many natural variations. However, the mechanisms that allow a category's sensorineural representation to be updated after experiencing new exemplars are not well understood, particularly at the molecular level. Here we investigate how a natural vocal category induces expression…

  17. Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.

    PubMed

    Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J

    2016-06-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall, our results indicate that the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.

  18. Impairment of Auditory-Motor Timing and Compensatory Reorganization after Ventral Premotor Cortex Stimulation

    PubMed Central

    Kornysheva, Katja; Schubotz, Ricarda I.

    2011-01-01

    Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, the anterior midline cerebellum showed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  19. Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.

    PubMed

    Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.

  20. Sensory-Motor Interactions for Vocal Pitch Monitoring in Non-Primary Human Auditory Cortex

    PubMed Central

    Larson, Charles R.; Jackson, Adam W.; Chen, Fangxiang; Hansen, Daniel R.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (−100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70–150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control. PMID:23577157

  1. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  2. Neural correlates of auditory short-term memory in rostral superior temporal cortex

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo

    2014-01-01

    Background: Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. Results: We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed-match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing, and in their resistance to sounds intervening between the sample and match. Conclusions: Like the monkeys’ behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. PMID:25456448

  3. Separating pitch chroma and pitch height in the human brain

    PubMed Central

    Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.

    2003-01-01

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719

  4. Separating pitch chroma and pitch height in the human brain.

    PubMed

    Warren, J D; Uppenkamp, S; Patterson, R D; Griffiths, T D

    2003-08-19

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas.

  5. Evidence for distinct human auditory cortex regions for sound location versus identity processing

    PubMed Central

    Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W.; Hung, An-Yi; Jääskeläinen, Iiro P.; Rauschecker, Josef P.; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi

    2014-01-01

    Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC. PMID:24121634

  6. Role of medio-dorsal frontal and posterior parietal neurons during auditory detection performance in rats.

    PubMed

    Bohon, Kaitlin S; Wiest, Michael C

    2014-01-01

    To further characterize the role of frontal and parietal cortices in rat cognition, we recorded action potentials simultaneously from multiple sites in the medio-dorsal frontal cortex and posterior parietal cortex of rats while they performed a two-choice auditory detection task. We quantified neural correlates of task performance, including response movements, perception of a target tone, and the differentiation between stimuli with distinct features (different pitches or durations). A minority of units--15% in frontal cortex, 23% in parietal cortex--significantly distinguished hit trials (successful detections, response movement to the right) from correct rejection trials (correct leftward response to the absence of the target tone). Estimating the contribution of movement-related activity to these responses suggested that more than half of these units were likely signaling correct perception of the auditory target, rather than merely movement direction. In addition, we found a smaller and mostly not overlapping population of units that differentiated stimuli based on task-irrelevant details. The detection-related spiking responses we observed suggest that correlates of perception in the rat are sparsely represented among neurons in the rat's frontal-parietal network, without being concentrated preferentially in frontal or parietal areas.

  7. Sound Frequency Representation in the Auditory Cortex of the Common Marmoset Visualized Using Optical Intrinsic Signal Imaging

    PubMed Central

    Tani, Toshiki; Abe, Hiroshi; Hayami, Taku; Banno, Taku; Kitamura, Naohito; Mashiko, Hiromi

    2018-01-01

    Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organization of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the middle lateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions. PMID:29736410
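
    As an aside for readers who want to see how a preferred-frequency (tonotopic) map can be read out from stimulus-evoked response maps, the following minimal Python sketch simply takes the best-responding stimulus at each pixel; the array shapes, frequency set, and masking threshold are hypothetical and are not taken from the study.

        import numpy as np

        # Hypothetical evoked intrinsic-signal maps: one 2-D cortical image per
        # band-pass noise stimulus, shape (n_frequencies, height, width).
        frequencies_khz = np.array([0.5, 1, 2, 4, 8, 16])
        rng = np.random.default_rng(0)
        responses = rng.random((len(frequencies_khz), 128, 128))

        # Preferred-frequency map: at each pixel, the stimulus frequency that evoked
        # the largest response. A real analysis would first filter and normalize maps.
        best_freq_map = frequencies_khz[np.argmax(responses, axis=0)]

        # Mask pixels whose peak response is too weak to be meaningful (ad hoc cutoff).
        peak = responses.max(axis=0)
        best_freq_map = np.where(peak > np.percentile(peak, 20), best_freq_map, np.nan)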

  8. Primary Auditory Cortex is Required for Anticipatory Motor Response.

    PubMed

    Li, Jingcheng; Liao, Xiang; Zhang, Jianxiong; Wang, Meng; Yang, Nian; Zhang, Jun; Lv, Guanghui; Li, Haohong; Lu, Jian; Ding, Ran; Li, Xingyi; Guang, Yu; Yang, Zhiqi; Qin, Han; Jin, Wenjun; Zhang, Kuan; He, Chao; Jia, Hongbo; Zeng, Shaoqun; Hu, Zhian; Nelken, Israel; Chen, Xiaowei

    2017-06-01

    The ability of the brain to predict future events based on the pattern of recent sensory experience is critical for guiding an animal's behavior. Neocortical circuits for ongoing processing of sensory stimuli are extensively studied, but their contributions to the anticipation of upcoming sensory stimuli remain less understood. We therefore used in vivo cellular imaging and fiber photometry to record from the mouse primary auditory cortex to elucidate its role in processing anticipated stimulation. We found neuronal ensembles in layers 2/3, 4, and 5 that were activated in relation to anticipated sound events following rhythmic stimulation. These neuronal activities correlated with the occurrence of anticipatory motor responses in an auditory learning task. Optogenetic manipulation experiments revealed an essential role of such neuronal activities in producing the anticipatory behavior. These results strongly suggest that the neural circuits of primary sensory cortex are critical for coding predictive information and transforming it into anticipatory motor behavior. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition

    PubMed Central

    Norman-Haignere, Sam

    2015-01-01

    The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles (“components”) whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. PMID:26687225

  10. Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition.

    PubMed

    Norman-Haignere, Sam; Kanwisher, Nancy G; McDermott, Josh H

    2015-12-16

    The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. Copyright © 2015 Elsevier Inc. All rights reserved.
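
    To make the "weighted combination of canonical response profiles" idea concrete, here is a minimal sketch that factorizes a sound-by-voxel response matrix with non-negative matrix factorization from scikit-learn. The published analysis used its own decomposition method, not NMF, and the data below are synthetic, so this only illustrates the general structure of the model.

        import numpy as np
        from sklearn.decomposition import NMF

        # Synthetic stand-in: responses of 5,000 auditory-cortex voxels to 165 sounds.
        rng = np.random.default_rng(1)
        n_sounds, n_voxels = 165, 5000
        data = np.abs(rng.normal(size=(n_sounds, n_voxels)))

        # Factorize: data ~ response_profiles (sounds x k) @ voxel_weights (k x voxels).
        model = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
        response_profiles = model.fit_transform(data)   # canonical profiles over sounds
        voxel_weights = model.components_               # each voxel's loading on each profile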

  11. Using fNIRS to Examine Occipital and Temporal Responses to Stimulus Repetition in Young Infants: Evidence of Selective Frontal Cortex Involvement

    PubMed Central

    Emberson, Lauren L.; Cannon, Grace; Palmeri, Holly; Richards, John E.; Aslin, Richard N.

    2016-01-01

    How does the developing brain respond to recent experience? Repetition suppression (RS) is a robust and well-characterized response to recent experience found predominantly in the perceptual cortices of the adult brain. We use functional near-infrared spectroscopy (fNIRS) to investigate how perceptual (temporal and occipital) and frontal cortices in the infant brain respond to auditory and visual stimulus repetitions (spoken words and faces). In Experiment 1, we find strong evidence of repetition suppression in the frontal cortex but only for auditory stimuli. In perceptual cortices, we find only suggestive evidence of auditory RS in the temporal cortex and no evidence of visual RS in any ROI. In Experiments 2 and 3, we replicate and extend these findings. Overall, we provide the first evidence that infant and adult brains respond differently to stimulus repetition. We suggest that the frontal lobe may support the development of RS in perceptual cortices. PMID:28012401

  12. Involvement of the human midbrain and thalamus in auditory deviance detection.

    PubMed

    Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César

    2015-02-01

    Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system that may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Spectral context affects temporal processing in awake auditory cortex

    PubMed Central

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.

    2013-01-01

    Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16-channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM were set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
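
    For readers unfamiliar with rate and temporal MTFs, the sketch below computes a rate MTF (spike count per stimulus) and a temporal MTF (vector strength of phase-locking to the modulation cycle) from spike times; the data and helper function are illustrative stand-ins, not the authors' analysis code.

        import numpy as np

        def vector_strength(spike_times_s, mod_freq_hz):
            """Phase-locking of spikes to the amplitude-modulation cycle (0 to 1)."""
            phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
            if phases.size == 0:
                return 0.0
            return float(np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases))))

        # Hypothetical spike times (s) during a 1-s SAM stimulus, one entry per
        # modulation frequency, pooled over repetitions.
        rng = np.random.default_rng(0)
        spikes_by_mod_freq = {f: np.sort(rng.uniform(0.0, 1.0, n))
                              for f, n in [(4, 80), (32, 60), (512, 40)]}

        rate_mtf = {f: t.size for f, t in spikes_by_mod_freq.items()}
        temporal_mtf = {f: vector_strength(t, f) for f, t in spikes_by_mod_freq.items()}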

  14. Protective Effects of Ginkgo biloba Extract EGb 761 against Noise Trauma-Induced Hearing Loss and Tinnitus Development

    PubMed Central

    Korn, Sabine

    2014-01-01

    Noise-induced hearing loss (NIHL) and resulting comorbidities like subjective tinnitus are common diseases in modern societies. A substance shown to be effective against NIHL in an animal model is the Ginkgo biloba extract EGb 761. Further effects of the extract on the cellular and systemic levels of the nervous system make it a promising candidate not only for protection against NIHL but also for its secondary comorbidities like tinnitus. Following an earlier study, we here tested the potential effectiveness of prophylactic EGb 761 treatment against NIHL and tinnitus development in the Mongolian gerbil. We monitored the effects of EGb 761 and noise trauma-induced changes on signal processing within the auditory system by means of behavioral and electrophysiological approaches. We found significantly reduced NIHL and tinnitus development upon EGb 761 application, compared to vehicle-treated animals. These protective effects of EGb 761 were correlated with changes in auditory processing, both at peripheral and central levels. We propose a model with two main effects of EGb 761 on auditory processing: first, an increase of auditory brainstem activity leading to an increased thalamic input to the primary auditory cortex (AI), and second, an asymmetric effect on lateral inhibition in AI. PMID:25028612

  15. Low-level information and high-level perception: the case of speech in noise.

    PubMed

    Nahum, Mor; Nelken, Israel; Ahissar, Merav

    2008-05-20

    Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefit of utilizing low-level binaural information for speech perception and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.

  16. Immersive audiomotor game play enhances neural and perceptual salience of weak signals in noise

    PubMed Central

    Whitton, Jonathon P.; Hancock, Kenneth E.; Polley, Daniel B.

    2014-01-01

    All sensory systems face the fundamental challenge of encoding weak signals in noisy backgrounds. Although discrimination abilities can improve with practice, these benefits rarely generalize to untrained stimulus dimensions. Inspired by recent findings that action video game training can impart a broader spectrum of benefits than traditional perceptual learning paradigms, we trained adult humans and mice in an immersive audio game that challenged them to forage for hidden auditory targets in a 2D soundscape. Both species learned to modulate their angular search vectors and target approach velocities based on real-time changes in the level of a weak tone embedded in broadband noise. In humans, mastery of this tone in noise task generalized to an improved ability to comprehend spoken sentences in speech babble noise. Neural plasticity in the auditory cortex of trained mice supported improved decoding of low-intensity sounds at the training frequency and an enhanced resistance to interference from background masking noise. These findings highlight the potential to improve the neural and perceptual salience of degraded sensory stimuli through immersive computerized games. PMID:24927596

  17. Learning Midlevel Auditory Codes from Natural Sound Statistics.

    PubMed

    Młynarski, Wiktor; McDermott, Josh H

    2018-03-01

    Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
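
    As a rough illustration of the first-layer idea (a sparse code of spectrotemporal features learned from sound), the sketch below learns a patch-based dictionary from a log spectrogram with scikit-learn. The published model is convolutional and has a second layer that encodes patterns of first-layer magnitudes, so this is only a simplified analogue on a synthetic signal.

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d

        # Synthetic stand-in for a natural sound: a slow chirp in noise.
        fs = 16000
        t = np.arange(0, 4.0, 1 / fs)
        rng = np.random.default_rng(0)
        sound = np.sin(2 * np.pi * (300 + 100 * t) * t) + 0.3 * rng.normal(size=t.size)

        # Log-magnitude spectrogram.
        _, _, sxx = spectrogram(sound, fs=fs, nperseg=256, noverlap=192)
        log_spec = np.log1p(sxx)

        # Learn a small dictionary of spectrotemporal kernels from 16 x 16 patches and
        # infer sparse activations (a stand-in for layer-1 coefficients).
        patches = extract_patches_2d(log_spec, (16, 16), max_patches=2000, random_state=0)
        patches = patches.reshape(len(patches), -1)
        patches -= patches.mean(axis=1, keepdims=True)

        dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, batch_size=64,
                                           random_state=0)
        codes = dico.fit_transform(patches)                # sparse activations
        kernels = dico.components_.reshape(-1, 16, 16)     # learned spectrotemporal features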

  18. Immersive audiomotor game play enhances neural and perceptual salience of weak signals in noise.

    PubMed

    Whitton, Jonathon P; Hancock, Kenneth E; Polley, Daniel B

    2014-06-24

    All sensory systems face the fundamental challenge of encoding weak signals in noisy backgrounds. Although discrimination abilities can improve with practice, these benefits rarely generalize to untrained stimulus dimensions. Inspired by recent findings that action video game training can impart a broader spectrum of benefits than traditional perceptual learning paradigms, we trained adult humans and mice in an immersive audio game that challenged them to forage for hidden auditory targets in a 2D soundscape. Both species learned to modulate their angular search vectors and target approach velocities based on real-time changes in the level of a weak tone embedded in broadband noise. In humans, mastery of this tone in noise task generalized to an improved ability to comprehend spoken sentences in speech babble noise. Neural plasticity in the auditory cortex of trained mice supported improved decoding of low-intensity sounds at the training frequency and an enhanced resistance to interference from background masking noise. These findings highlight the potential to improve the neural and perceptual salience of degraded sensory stimuli through immersive computerized games.

  19. Deviance sensitivity in the auditory cortex of freely moving rats

    PubMed Central

    2018-01-01

    Deviance sensitivity is the specific response to a surprising stimulus, one that violates expectations set by the past stimulation stream. In audition, deviance sensitivity is often conflated with stimulus-specific adaptation (SSA), the decrease in responses to a common stimulus that only partially generalizes to other, rare stimuli. SSA is usually measured using oddball sequences, where a common (standard) tone and a rare (deviant) tone are randomly intermixed. However, the larger responses to a tone when deviant do not necessarily represent deviance sensitivity. Deviance sensitivity is commonly tested using a control sequence in which many different tones serve as the standard, eliminating the expectations set by the standard ('deviant among many standards'). When the response to a tone when deviant (against a single standard) is larger than the responses to the same tone in the control sequence, it is concluded that true deviance sensitivity occurs. In primary auditory cortex of anesthetized rats, responses to deviants and to the same tones in the control condition are comparable in size. We recorded local field potentials and multiunit activity from the auditory cortex of awake, freely moving rats, implanted with 32-channel drivable microelectrode arrays and using telemetry. We observed highly significant SSA in the awake state. Moreover, the responses to a tone when deviant were significantly larger than the responses to the same tone in the control condition. These results establish the presence of true deviance sensitivity in primary auditory cortex in awake rats. PMID:29874246
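
    The sketch below shows one common way to construct the two sequence types and a simple deviance index contrasting the response to a tone when deviant with the response to the same tone in the many-standards control; the probabilities, tone values, and index are illustrative, not the study's exact design.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 400
        tone_a, tone_b = 4000.0, 4500.0              # Hz; hypothetical tone pair

        # Oddball sequence: tone_b is the rare deviant (p = 0.1) among tone_a standards.
        oddball = rng.choice([tone_a, tone_b], size=n_trials, p=[0.9, 0.1])

        # Many-standards control: tone_b occurs with the same probability (1 of 10
        # equiprobable tones), but no single standard sets up an expectation.
        control_tones = np.append(np.geomspace(1000, 16000, 9), tone_b)
        control = rng.choice(control_tones, size=n_trials)

        def deviance_index(resp_deviant, resp_control):
            """Normalized difference between responses to a tone when deviant and to
            the same tone in the many-standards control (e.g., spike counts)."""
            d, c = np.mean(resp_deviant), np.mean(resp_control)
            return (d - c) / (d + c)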

  20. Listening to Filtered Music as a Treatment Option for Tinnitus: A Review

    PubMed Central

    Wilson, E. Courtenay; Schlaug, Gottfried; Pantev, Christo

    2010-01-01

    Tinnitus is the perception of a sound in the absence of an external acoustic stimulus, and it affects roughly 10-15% of the population. This review will discuss the different types of tinnitus and the current research on the underlying neural substrates of subjective tinnitus. Particular attention will be paid to the plasticity of the auditory cortex, the inputs from non-auditory centers in the central nervous system and how these are affected by tinnitus. We will also discuss several therapies that utilize music as a treatment for tinnitus and highlight a novel method that filters out the tinnitus frequency from the music, leveraging the plasticity in the auditory cortex as a means of reducing the impact of tinnitus. PMID:21170296

  1. Auditory Neuroscience: Temporal Anticipation Enhances Cortical Processing

    PubMed Central

    Walker, Kerry M. M.; King, Andrew J.

    2015-01-01

    A recent study shows that expectation about the timing of behaviorally relevant sounds enhances the responses of neurons in the primary auditory cortex and improves the accuracy and speed with which animals respond to those sounds. PMID:21481759

  2. Auditory Processing, Speech Perception and Phonological Ability in Pre-School Children at High-Risk for Dyslexia: A Longitudinal Study of the Auditory Temporal Processing Theory

    ERIC Educational Resources Information Center

    Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol

    2007-01-01

    This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school…

  3. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  4. AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX

    PubMed Central

    Weinberger, Norman M.

    2009-01-01

    Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning), revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002

  5. Familiarity with a vocal category biases the compartmental expression of Arc/Arg3.1 in core auditory cortex.

    PubMed

    Ivanova, Tamara N; Gross, Christina; Mappus, Rudolph C; Kwon, Yong Jun; Bassell, Gary J; Liu, Robert C

    2017-12-01

    Learning to recognize a stimulus category requires experience with its many natural variations. However, the mechanisms that allow a category's sensorineural representation to be updated after experiencing new exemplars are not well understood, particularly at the molecular level. Here we investigate how a natural vocal category induces expression in the auditory system of a key synaptic plasticity effector immediate early gene, Arc/Arg3.1, which is required for memory consolidation. We use the ultrasonic communication system between mouse pups and adult females to study whether prior familiarity with pup vocalizations alters how Arc is engaged in the core auditory cortex after playback of novel exemplars from the pup vocal category. A computerized, 3D surface-assisted cellular compartmental analysis, validated against manual cell counts, demonstrates significant changes in the recruitment of neurons expressing Arc in pup-experienced animals (mothers and virgin females "cocaring" for pups) compared with pup-inexperienced animals (pup-naïve virgins), especially when listening to more familiar, natural calls compared to less familiar but similarly recognized tonal model calls. Our data support the hypothesis that the kinetics of Arc induction to refine cortical representations of sensory categories is sensitive to the familiarity of the sensory experience. © 2017 Ivanova et al.; Published by Cold Spring Harbor Laboratory Press.

  6. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    PubMed

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). However, only in the temporal tasks was the STRF changed along the temporal dimension, through a sharpening of temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at a network and single unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.

  7. Effect of acute swim stress on plasma corticosterone and brain monoamine levels in bidirectionally selected DxH recombinant inbred mouse strains differing in fear recall and extinction.

    PubMed

    Browne, Caroline A; Hanke, Joachim; Rose, Claudia; Walsh, Irene; Foley, Tara; Clarke, Gerard; Schwegler, Herbert; Cryan, John F; Yilmazer-Hanke, Deniz

    2014-12-01

    Stress-induced changes in plasma corticosterone and central monoamine levels were examined in mouse strains that differ in fear-related behaviors. Two DxH recombinant inbred mouse strains with a DBA/2J background, which were originally bred for a high (H-FSS) and low fear-sensitized acoustic startle reflex (L-FSS), were used. Levels of noradrenaline, dopamine, and serotonin and their metabolites 3,4-dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA), and 5-hydroxyindoleacetic acid (5-HIAA) were studied in the amygdala, hippocampus, medial prefrontal cortex, striatum, hypothalamus and brainstem. H-FSS mice exhibited increased fear levels and a deficit in fear extinction (within-session) in the auditory fear-conditioning test, and depressive-like behavior in the acute forced swim stress test. They had higher tissue noradrenaline and serotonin levels and lower dopamine and serotonin turnover under basal conditions, although they were largely insensitive to stress-induced changes in neurotransmitter metabolism. In contrast, acute swim stress increased monoamine levels but decreased turnover in the less fearful L-FSS mice. L-FSS mice also showed a trend toward higher basal and stress-induced corticosterone levels and an increase in noradrenaline and serotonin in the hypothalamus and brainstem 30 min after stress compared to H-FSS mice. Moreover, the dopaminergic system was activated differentially in the medial prefrontal cortex and striatum of the two strains by acute stress. Thus, H-FSS mice showed increased basal noradrenaline tissue levels compatible with a fear phenotype or chronic stressed condition. Low corticosterone levels and the poor monoamine response to stress in H-FSS mice may point to mechanisms similar to those found in principal fear disorders or post-traumatic stress disorder.

  8. Effect of Acute Swim Stress on Plasma Corticosterone and Brain Monoamine Levels in Bidirectionally Selected DxH Recombinant Inbred Mouse Strains Differing in Fear Recall and Extinction

    PubMed Central

    Browne, Caroline A.; Hanke, Joachim; Rose, Claudia; Walsh, Irene; Foley, Tara; Clarke, Gerard; Schwegler, Herbert; Cryan, John F.; Yilmazer-Hanke, Deniz

    2015-01-01

    Stress-induced changes in plasma corticosterone and central monoamine levels were examined in mouse strains that differ in fear-related behaviors. Two DxH recombinant inbred mouse strains with a DBA/2J background, which were originally bred for a high (H-FSS) and low fear-sensitized acoustic startle reflex (L-FSS), were used. Levels of noradrenaline, dopamine, and serotonin and their metabolites 3,4-dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA), and 5-hydroxyindoleacetic acid (5-HIAA) were studied in the amygdala, hippocampus, medial prefrontal cortex, striatum, hypothalamus, and brainstem. H-FSS mice exhibited increased fear levels and a deficit in fear extinction (within-session) in the auditory fear-conditioning test, and depressive-like behavior in the acute forced swim stress test. They had higher tissue noradrenaline and serotonin levels and lower dopamine and serotonin turnover under basal conditions, although they were largely insensitive to stress-induced changes in neurotransmitter metabolism. In contrast, acute swim stress increased monoamine levels but decreased turnover in the less fearful L-FSS mice. L-FSS mice also showed a trend toward higher basal and stress-induced corticosterone levels and an increase in noradrenaline and serotonin in the hypothalamus and brainstem 30 minutes after stress compared to H-FSS mice. Moreover, the dopaminergic system was activated differentially in the medial prefrontal cortex and striatum of the two strains by acute stress. Thus, H-FSS mice showed increased basal noradrenaline tissue levels compatible with a fear phenotype or chronic stressed condition. Low corticosterone levels and the poor monoamine response to stress in H-FSS mice may point to mechanisms similar to those found in principal fear disorders or posttraumatic stress disorder. PMID:25117886

  9. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  10. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  11. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising

    PubMed Central

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework’s simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887

  12. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    PubMed

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.
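
    The study compared infomax, extended infomax, JADE, and fastICA; as a hedged illustration of the basic unmixing step on a toy mixture, the sketch below separates a 40-Hz ASSR-like source from a pulsatile artifact-like source with scikit-learn's FastICA. The simulated sources, topographies, and channel count are stand-ins, not the paper's forward model.

        import numpy as np
        from sklearn.decomposition import FastICA

        fs = 1000.0
        t = np.arange(0, 2.0, 1 / fs)
        rng = np.random.default_rng(0)

        # Toy sources: a 40-Hz ASSR-like oscillation and a 250-Hz pulsatile artifact.
        assr = np.sin(2 * np.pi * 40 * t)
        artifact = 5.0 * (np.mod(t, 1 / 250) < 1 / fs)
        sources = np.c_[assr, artifact]

        # Mix into 32 simulated scalp channels with random topographies plus noise.
        mixing = rng.normal(size=(2, 32))
        eeg = sources @ mixing + 0.1 * rng.normal(size=(t.size, 32))

        # Unmix; the artifact component can then be identified (e.g., by its spectrum)
        # and zeroed out before back-projecting to the channel space.
        ica = FastICA(n_components=2, random_state=0)
        estimated_sources = ica.fit_transform(eeg)       # shape (n_samples, n_components)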

  13. Repeated restraint stress impairs auditory attention and GABAergic synaptic efficacy in the rat auditory cortex.

    PubMed

    Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies

    2013-08-29

    Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of over 80% correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after the stress period. A difference score was determined by subtracting the number of correct trials after from those before the stress protocol. Another set of rats was used to study synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Reduced frontal theta oscillations indicate altered crossmodal prediction error processing in schizophrenia

    PubMed Central

    Keil, Julian; Balz, Johanna; Gallinat, Jürgen; Senkowski, Daniel

    2016-01-01

    Our brain generates predictions about forthcoming stimuli and compares predicted with incoming input. Failures in predicting events might contribute to hallucinations and delusions in schizophrenia (SZ). When a stimulus violates prediction, neural activity that reflects prediction error (PE) processing is found. While PE processing deficits have been reported in unisensory paradigms, it is unknown whether SZ patients (SZP) show altered crossmodal PE processing. We measured high-density electroencephalography and applied source estimation approaches to investigate crossmodal PE processing generated by audiovisual speech. In SZP and healthy control participants (HC), we used an established paradigm in which high- and low-predictive visual syllables were paired with congruent or incongruent auditory syllables. We examined crossmodal PE processing in SZP and HC by comparing differences in event-related potentials and neural oscillations between incongruent and congruent high- and low-predictive audiovisual syllables. In both groups event-related potentials between 206 and 250 ms were larger in high- compared with low-predictive syllables, suggesting intact audiovisual incongruence detection in the auditory cortex of SZP. The analysis of oscillatory responses revealed theta-band (4–7 Hz) power enhancement in high- compared with low-predictive syllables between 230 and 370 ms in the frontal cortex of HC but not SZP. Thus aberrant frontal theta-band oscillations reflect crossmodal PE processing deficits in SZ. The present study suggests a top-down multisensory processing deficit and highlights the role of dysfunctional frontal oscillations for the SZ psychopathology. PMID:27358314
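
    To illustrate one simple way of quantifying theta-band (4–7 Hz) power in a post-stimulus window, the sketch below band-pass filters a single-channel signal and averages the squared Hilbert envelope; the study itself used source-space time-frequency analysis of high-density EEG, so this is only a simplified, single-sensor analogue with hypothetical window settings.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def theta_power(signal, fs, band=(4.0, 7.0), window=(0.230, 0.370)):
            """Mean theta-band power within a post-stimulus time window (seconds)."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
            envelope = np.abs(hilbert(filtfilt(b, a, signal)))
            times = np.arange(signal.size) / fs
            mask = (times >= window[0]) & (times < window[1])
            return float(np.mean(envelope[mask] ** 2))

        # Toy single-trial signal sampled at 500 Hz: a 5-Hz burst around 300 ms in noise.
        fs = 500.0
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(0)
        trial = (np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.3) ** 2) / 0.01)
                 + 0.2 * rng.normal(size=t.size))
        print(theta_power(trial, fs))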

  16. The differential contributions of pFC and temporo-parietal cortex to multimodal semantic control: exploring refractory effects in semantic aphasia.

    PubMed

    Gardner, Hannah E; Lambon Ralph, Matthew A; Dodds, Naomi; Jones, Theresa; Ehsan, Sheeba; Jefferies, Elizabeth

    2012-04-01

    Aphasic patients with multimodal semantic impairment following pFC or temporo-parietal (TP) cortex damage (semantic aphasia [SA]) have deficits characterized by poor control of semantic activation/retrieval, as opposed to loss of semantic knowledge per se. In line with this, SA patients show "refractory effects"; that is, declining accuracy in cyclical word-picture matching tasks when semantically related sets are presented rapidly and repeatedly. This is argued to follow a build-up of competition between targets and distractors. However, the link between poor semantic control and refractory effects is still controversial for two reasons. (1) Some theories propose that refractory effects are specific to verbal or auditory tasks, yet SA patients show poor control over semantic processing in both word and picture semantic tasks. (2) SA can result from lesions to either the left pFC or TP cortex, yet previous work suggests that refractory effects are specifically linked to the left inferior frontal cortex. For the first time, verbal, visual, and nonverbal auditory refractory effects were explored in nine SA patients who had pFC (pFC+) or TP cortex (TP-only) lesions. In all modalities, patient accuracy declined significantly over repetitions. This refractory effect at the group level was driven by pFC+ patients and was not shown by individuals with TP-only lesions. These findings support the theory that SA patients have reduced control over multimodal semantic retrieval and, additionally, suggest there may be functional specialization within the posterior versus pFC elements of the semantic control network.

  17. Gap-induced reductions of evoked potentials in the auditory cortex: A possible objective marker for the presence of tinnitus in animals.

    PubMed

    Berger, Joel I; Owen, William; Wilson, Caroline A; Hockley, Adam; Coomber, Ben; Palmer, Alan R; Wallace, Mark N

    2018-01-15

    Animal models of tinnitus are essential for determining the underlying mechanisms and testing pharmacotherapies. However, there is doubt over the validity of current behavioural methods for detecting tinnitus. Here, we applied a stimulus paradigm widely used in a behavioural test (gap-induced inhibition of the acoustic startle reflex; GPIAS) whilst recording from the auditory cortex, and showed neural response changes that mirror those found in the behavioural tests. We implanted guinea pigs (GPs) with electrocorticographic (ECoG) arrays and recorded baseline auditory cortical responses to a startling stimulus. When a gap was inserted in otherwise continuous background noise prior to the startling stimulus, there was a clear reduction in the subsequent evoked response (termed gap-induced reductions in evoked potentials; GIREP), suggestive of a neural analogue of the GPIAS test. We then unilaterally exposed guinea pigs to narrowband noise (left ear; 8-10 kHz; 1 h) at one of two different sound levels - either 105 dB SPL or 120 dB SPL - and recorded the same responses seven-to-ten weeks following the noise exposure. Significant deficits in GIREP were observed for all areas of the auditory cortex (AC) in the 120 dB-exposed GPs, but not in the 105 dB-exposed GPs. These deficits could not simply be accounted for by changes in response amplitudes. Furthermore, in the contralateral (right) caudal AC we observed a significant increase in evoked potential amplitudes across narrowband background frequencies in both 105 dB and 120 dB-exposed GPs. Taken in the context of the large body of literature that has used the behavioural test as a demonstration of the presence of tinnitus, these results are suggestive of objective neural correlates of the presence of noise-induced tinnitus and hyperacusis. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
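
    As a sketch of how a gap-induced reduction in the evoked potential could be expressed as a percentage, the code below compares the trial-averaged peak-to-peak amplitude on gap and no-gap trials; the epoching, amplitude measure, and toy data are illustrative and do not reproduce the authors' pipeline.

        import numpy as np

        def evoked_amplitude(epochs):
            """Peak-to-peak amplitude of the trial-averaged evoked potential.
            epochs: array of shape (n_trials, n_samples)."""
            average = epochs.mean(axis=0)
            return average.max() - average.min()

        def girep_percent(epochs_no_gap, epochs_gap):
            """Percent reduction of the startle-evoked potential on gap trials."""
            a_no_gap = evoked_amplitude(epochs_no_gap)
            return 100.0 * (a_no_gap - evoked_amplitude(epochs_gap)) / a_no_gap

        # Toy data: gap trials carry a smaller evoked deflection than no-gap trials.
        rng = np.random.default_rng(0)
        wave = np.sin(np.linspace(0, np.pi, 300))
        no_gap = rng.normal(0, 1, (50, 300)) + 3.0 * wave
        gap = rng.normal(0, 1, (50, 300)) + 2.0 * wave
        print(girep_percent(no_gap, gap))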

  18. Decoding Multiple Sound Categories in the Human Temporal Cortex Using High Resolution fMRI

    PubMed Central

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C. M.

    2015-01-01

    Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases. PMID:25692885

  19. Decoding multiple sound categories in the human temporal cortex using high resolution fMRI.

    PubMed

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C M

    2015-01-01

    Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases.
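
    To give a concrete feel for the analysis, the sketch below wraps a linear multi-class SVM in recursive feature elimination and estimates decoding accuracy with cross-validation using scikit-learn; the published MSVM-RFE procedure and preprocessing differ in detail, and the voxel patterns here are synthetic.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in: 120 trials x 2,000 voxels, 4 sound categories, with a
        # weak category-specific signal injected into a handful of voxels.
        rng = np.random.default_rng(0)
        n_trials, n_voxels, n_classes = 120, 2000, 4
        labels = np.repeat(np.arange(n_classes), n_trials // n_classes)
        patterns = rng.normal(size=(n_trials, n_voxels))
        patterns[np.arange(n_trials), labels] += 2.0

        # Recursive feature elimination around a linear (one-vs-rest) SVM, keeping the
        # 200 most informative voxels; the fitted RFE object also acts as the classifier.
        clf = RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=200, step=0.2)
        accuracy = cross_val_score(clf, patterns, labels, cv=5)
        print(accuracy.mean())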

  20. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    PubMed

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
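
    The opponent-channel readout described above can be sketched as the difference between two hemifield-tuned population signals followed by a simple mapping to azimuth; the toy tuning, voxel counts, and linear decoder below are illustrative stand-ins for the fMRI analysis.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        azimuths = np.linspace(-90, 90, 37)        # degrees; negative = left hemifield
        n_vox = 200

        # Toy voxel responses: left planum temporale prefers right (contralateral)
        # azimuths, right planum temporale prefers left azimuths; broad, noisy tuning.
        left_pt = 1 + 0.01 * azimuths[:, None] + 0.05 * rng.normal(size=(azimuths.size, n_vox))
        right_pt = 1 - 0.01 * azimuths[:, None] + 0.05 * rng.normal(size=(azimuths.size, n_vox))

        # Opponent-channel signal: difference of the two hemifield-tuned population means.
        opponent = left_pt.mean(axis=1) - right_pt.mean(axis=1)

        # Decode azimuth from the opponent signal with a simple linear mapping.
        decoder = LinearRegression().fit(opponent.reshape(-1, 1), azimuths)
        predicted = decoder.predict(opponent.reshape(-1, 1))
        print(np.corrcoef(predicted, azimuths)[0, 1])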
