Sample records for cortical auditory processing

  1. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  2. Top-down modulation of visual and auditory cortical processing in aging.

    PubMed

    Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M

    2015-02-01

    Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Phonological Processing in Human Auditory Cortical Fields

    PubMed Central

    Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.

    2011-01-01

    We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252

  4. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  5. Cortical Development and Neuroplasticity in Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Sharma, Anu; Cardon, Garrett

    2015-01-01

    Cortical development is dependent to a large extent on stimulus-driven input. Auditory Neuropathy Spectrum Disorder (ANSD) is a recently described form of hearing impairment where neural dys-synchrony is the predominant characteristic. Children with ANSD provide a unique platform to examine the effects of asynchronous and degraded afferent stimulation on cortical auditory neuroplasticity and behavioral processing of sound. In this review, we describe patterns of auditory cortical maturation in children with ANSD. The disruption of cortical maturation that leads to these various patterns includes high levels of intra-individual cortical variability and deficits in cortical phase synchronization of oscillatory neural responses. These neurodevelopmental changes, which are constrained by sensitive periods for central auditory maturation, are correlated with behavioral outcomes for children with ANSD. Overall, we hypothesize that patterns of cortical development in children with ANSD appear to be markers of the severity of the underlying neural dys-synchrony, providing prognostic indicators of success of clinical intervention with amplification and/or electrical stimulation. PMID:26070426

  6. Cerebral responses to local and global auditory novelty under general anesthesia

    PubMed Central

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2017-01-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  7. Electrophysiological Evidence for the Sources of the Masking Level Difference.

    PubMed

    Fowler, Cynthia G

    2017-08-16

    This review article summarizes evidence from auditory evoked potential studies describing the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem code temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.

  8. Injury- and Use-Related Plasticity in the Adult Auditory System.

    ERIC Educational Resources Information Center

    Irvine, Dexter R. F.

    2000-01-01

    This article discusses findings concerning the plasticity of auditory cortical processing mechanisms in adults, including the effects of restricted cochlear damage or behavioral training with acoustic stimuli on the frequency selectivity of auditory cortical neurons and evidence for analogous injury- and use-related plasticity in the adult human…

  9. Magnetoencephalographic Imaging of Auditory and Somatosensory Cortical Responses in Children with Autism and Sensory Processing Dysfunction

    PubMed Central

    Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.

    2017-01-01

    This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492

  10. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  11. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  12. Frequency preference and attention effects across cortical depths in the human primary auditory cortex.

    PubMed

    De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia

    2015-12-29

    Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.

  13. Cortical modulation of auditory processing in the midbrain

    PubMed Central

    Bajo, Victoria M.; King, Andrew J.

    2013-01-01

    In addition to their ascending pathways that originate at the receptor cells, all sensory systems are characterized by extensive descending projections. Although these connections are often more extensive than those that carry information in the ascending auditory pathway, we still have a relatively poor understanding of the role they play in sensory processing. In the auditory system, one of the main corticofugal projections links layer V pyramidal neurons with the inferior colliculus (IC) in the midbrain. All auditory cortical fields contribute to this projection, with the primary areas providing the largest outputs to the IC. In addition to medium and large pyramidal cells in layer V, a variety of cell types in layer VI make a small contribution to the ipsilateral corticocollicular projection. Cortical neurons innervate the three IC subdivisions bilaterally, although the contralateral projection is relatively small. The dorsal and lateral cortices of the IC are the principal targets of corticocollicular axons, but input to the central nucleus has also been described in some studies and is distinctive in its laminar topographic organization. Focal electrical stimulation and inactivation studies have shown that the auditory cortex can modify almost every aspect of the response properties of IC neurons, including their sensitivity to sound frequency, intensity, and location. Along with other descending pathways in the auditory system, the corticocollicular projection appears to continually modulate the processing of acoustical signals at subcortical levels. In particular, there is growing evidence that these circuits play a critical role in the plasticity of neural processing that underlies the effects of learning and experience on auditory perception by enabling changes in cortical response properties to spread to subcortical nuclei. PMID:23316140

  14. Cortical mechanisms for the segregation and representation of acoustic textures.

    PubMed

    Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D

    2010-02-10

    Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
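
    As a concrete illustration of the texture construction described above, the sketch below generates an "acoustic texture" from multiple frequency-modulated ramps whose spectrotemporal coherence is controlled by a single parameter. This is a hedged Python/NumPy toy, not the authors' stimulus code; the frequency ranges, slope values, and the coherence parameterization are assumptions for illustration. Coherent textures draw all ramps from a narrow distribution of frequency slopes, whereas incoherent textures scatter slopes over a wide range, and concatenating two textures with different coherences creates a boundary of the kind the study probed.

      import numpy as np

      def fm_ramp(f_start, slope, dur, fs):
          # Single linear frequency-modulated ramp (slope in Hz/s), Hann-windowed.
          t = np.arange(int(dur * fs)) / fs
          phase = 2 * np.pi * (f_start * t + 0.5 * slope * t ** 2)
          return np.sin(phase) * np.hanning(t.size)

      def acoustic_texture(n_ramps=50, coherence=1.0, dur=1.0, fs=16000, rng=None):
          # Sum of FM ramps; 'coherence' in [0, 1] scales how similar the ramp slopes are.
          # coherence = 1 -> all ramps share one slope (coherent texture);
          # coherence = 0 -> slopes scattered over a wide range (incoherent texture).
          rng = np.random.default_rng() if rng is None else rng
          base_slope = rng.uniform(-2000, 2000)              # shared slope in Hz/s (assumed range)
          spread = (1.0 - coherence) * 4000                  # slope variability in Hz/s
          sig = np.zeros(int(dur * fs))
          for _ in range(n_ramps):
              f0 = rng.uniform(300, 4000)                    # random start frequency in Hz
              slope = base_slope + rng.uniform(-spread, spread)
              sig += fm_ramp(f0, slope, dur, fs)
          return sig / np.max(np.abs(sig))                   # normalize to +/- 1

      # Two adjacent textures with different coherences form a boundary stimulus.
      boundary_stimulus = np.concatenate([acoustic_texture(coherence=0.2),
                                          acoustic_texture(coherence=0.9)])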

  15. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    PubMed

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Thirty-one healthy participants were studied using functional magnetic resonance imaging. The response in 18 MCI and 18 pAD patients was then determined and compared with that of 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences, and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  16. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. The aim of this study was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  17. Early Blindness Results in Developmental Plasticity for Auditory Motion Processing within Auditory and Occipital Cortex

    PubMed Central

    Jiang, Fang; Stecker, G. Christopher; Boynton, Geoffrey M.; Fine, Ione

    2016-01-01

    Early blind subjects exhibit superior abilities for processing auditory motion, which are accompanied by enhanced BOLD responses to auditory motion within hMT+ and reduced responses within right planum temporale (rPT). Here, by comparing BOLD responses to auditory motion in hMT+ and rPT within sighted controls, early blind, late blind, and sight-recovery individuals, we were able to separately examine the effects of developmental and adult visual deprivation on cortical plasticity within these two areas. We find that both the enhanced auditory motion responses in hMT+ and the reduced functionality in rPT are driven by the absence of visual experience early in life; neither loss nor recovery of vision later in life had a discernible influence on plasticity within these areas. Cortical plasticity as a result of blindness has generally been presumed to be mediated by competition across modalities within a given cortical region. The reduced functionality within rPT as a result of early visual loss implicates an additional mechanism for cross-modal plasticity as a result of early blindness: competition across different cortical areas for a functional role. PMID:27458357

  18. Visual Processing Recruits the Auditory Cortices in Prelingually Deaf Children and Influences Cochlear Implant Outcomes.

    PubMed

    Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-09-01

    Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might be different in processing different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited; 10 with well-performing CIs, 10 with poorly performing CIs. Ten age- and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls, and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found for the FC3 and FC4 areas. No significant difference was found in N1 latencies and P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.

  19. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  20. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
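
    The branched architecture this abstract describes, shared early layers that split into separate speech and music pathways trained on two tasks, can be sketched roughly as follows. This is a hedged PyTorch toy, not the published network; the layer sizes, the cochleagram input shape, and the output dimensions (labelled n_words and n_genres here) are assumptions for illustration only.

      import torch
      import torch.nn as nn

      class BranchedAudioNet(nn.Module):
          # Toy task-optimized network: shared early layers that split into separate
          # speech (word-recognition) and music (genre-recognition) branches.
          def __init__(self, n_words=587, n_genres=41):
              super().__init__()
              self.shared = nn.Sequential(                   # early shared processing
                  nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
                  nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d((8, 8)),
              )
              def branch(n_out):                             # deeper task-specific pathway
                  return nn.Sequential(
                      nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),
                      nn.Flatten(),
                      nn.LazyLinear(256), nn.ReLU(),
                      nn.Linear(256, n_out),
                  )
              self.speech_branch = branch(n_words)
              self.music_branch = branch(n_genres)

          def forward(self, cochleagram):                    # (batch, 1, freq, time)
              h = self.shared(cochleagram)
              return self.speech_branch(h), self.music_branch(h)

      net = BranchedAudioNet()
      x = torch.randn(4, 1, 128, 256)                        # fake cochleagram batch
      word_logits, genre_logits = net(x)
      loss = (nn.functional.cross_entropy(word_logits, torch.randint(0, 587, (4,))) +
              nn.functional.cross_entropy(genre_logits, torch.randint(0, 41, (4,))))
      loss.backward()                                        # joint optimization on both tasks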

  1. Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.

    PubMed

    Stothart, George; Kazanina, Nina

    2016-11-01

    Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials, we investigated the role of cortical inhibition in auditory and audiovisual processing in younger and older adults. Across pure-tone, auditory speech, and audiovisual speech paradigms, older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy

    PubMed Central

    Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
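
    Eigenvector centrality, the network measure highlighted in this abstract, can be computed for a functional connectivity matrix with a few lines of linear algebra. The Python/NumPy sketch below is illustrative only; the toy data and the shift of correlations into a non-negative range are assumptions, not the study's exact eigenvector centrality mapping pipeline.

      import numpy as np

      def eigenvector_centrality(timeseries):
          # timeseries: (n_nodes, n_timepoints) array of BOLD signals.
          # Returns one centrality value per node: the leading eigenvector of a
          # non-negative node-by-node similarity matrix.
          corr = np.corrcoef(timeseries)              # node-by-node correlation matrix
          sim = (corr + 1.0) / 2.0                    # shift to [0, 1] so the matrix is non-negative
          np.fill_diagonal(sim, 0.0)                  # ignore self-connections
          eigvals, eigvecs = np.linalg.eigh(sim)      # symmetric matrix -> real spectrum
          v = np.abs(eigvecs[:, np.argmax(eigvals)])  # eigenvector of the largest eigenvalue
          return v / v.sum()

      # Toy example: 100 "regions", 200 time points of synthetic BOLD data.
      rng = np.random.default_rng(0)
      bold = rng.standard_normal((100, 200))
      centrality = eigenvector_centrality(bold)
      hub_regions = np.argsort(centrality)[::-1][:10]  # ten most "influential" nodes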

  3. Neural circuits in Auditory and Audiovisual Memory

    PubMed Central

    Plakke, B.; Romanski, L.M.

    2016-01-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years, neurophysiological and lesion studies have indicated a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. PMID:26656069

  4. Neural circuits in auditory and audiovisual memory.

    PubMed

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years, neurophysiological and lesion studies have indicated a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.

    PubMed

    Meredith, M Alex; Allman, Brian L

    2015-03-01

    The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm.

    PubMed

    Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R

    2010-03-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H₂¹⁵O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.

  7. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  8. Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha

    PubMed Central

    Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim

    2015-01-01

    The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641
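
    A minimal sketch of how delta- versus theta-band entrainment to the speech envelope might be quantified is shown below. It uses spectral coherence between one EEG channel and the broadband amplitude envelope of the speech signal; the toy data, sampling rate, and coherence settings are assumptions for illustration, not the authors' analysis code.

      import numpy as np
      from scipy.signal import hilbert, coherence

      def band_coherence(eeg, audio, fs, band):
          # Coherence between an EEG channel and the speech amplitude envelope,
          # averaged over the requested frequency band (Hz).
          envelope = np.abs(hilbert(audio))              # broadband amplitude envelope
          f, coh = coherence(eeg, envelope, fs=fs, nperseg=int(4 * fs))
          mask = (f >= band[0]) & (f <= band[1])
          return coh[mask].mean()

      # Toy example: 60 s of synthetic data, both signals sampled at 250 Hz.
      fs = 250
      rng = np.random.default_rng(1)
      audio = rng.standard_normal(60 * fs)
      eeg = rng.standard_normal(60 * fs)

      delta_entrainment = band_coherence(eeg, audio, fs, band=(1, 4))   # delta: 1-4 Hz
      theta_entrainment = band_coherence(eeg, audio, fs, band=(4, 8))   # theta: 4-8 Hz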

  9. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Oxytocin Enables Maternal Behavior by Balancing Cortical Inhibition

    PubMed Central

    Marlin, Bianca J.; Mitre, Mariela; D’amour, James A.; Chao, Moses V.; Froemke, Robert C.

    2015-01-01

    Oxytocin is important for social interactions and maternal behavior. However, little is known about when, where, and how oxytocin modulates neural circuits to improve social cognition. Here we show how oxytocin enables pup retrieval behavior in female mice by enhancing auditory cortical pup call responses. Retrieval behavior required left but not right auditory cortex, was accelerated by oxytocin in left auditory cortex, and oxytocin receptors were preferentially expressed in left auditory cortex. Neural responses to pup calls were lateralized, with co-tuned and temporally-precise excitatory and inhibitory responses in left cortex of maternal but not pup-naive adults. Finally, pairing calls with oxytocin enhanced responses by balancing the magnitude and timing of inhibition with excitation. Our results describe fundamental synaptic mechanisms by which oxytocin increases the salience of acoustic social stimuli. Furthermore, oxytocin-induced plasticity provides a biological basis for lateralization of auditory cortical processing. PMID:25874674

  11. Auditory hallucinations and the temporal cortical response to speech in schizophrenia: a functional magnetic resonance imaging study.

    PubMed

    Woodruff, P W; Wright, I C; Bullmore, E T; Brammer, M; Howard, R J; Williams, S C; Shapleske, J; Rossell, S; David, A S; McGuire, P K; Murray, R M

    1997-12-01

    The authors explored whether abnormal functional lateralization of temporal cortical language areas in schizophrenia was associated with a predisposition to auditory hallucinations and whether the auditory hallucinatory state would reduce the temporal cortical response to external speech. Functional magnetic resonance imaging was used to measure the blood-oxygenation-level-dependent signal induced by auditory perception of speech in three groups of male subjects: eight schizophrenic patients with a history of auditory hallucinations (trait-positive), none of whom was currently hallucinating; seven schizophrenic patients without such a history (trait-negative); and eight healthy volunteers. Seven schizophrenic patients were also examined while they were actually experiencing severe auditory verbal hallucinations and again after their hallucinations had diminished. Voxel-by-voxel comparison of the median power of subjects' responses to periodic external speech revealed that this measure was reduced in the left superior temporal gyrus but increased in the right middle temporal gyrus in the combined schizophrenic groups relative to the healthy comparison group. Comparison of the trait-positive and trait-negative patients revealed no clear difference in the power of temporal cortical activation. Comparison of patients when experiencing severe hallucinations and when hallucinations were mild revealed reduced responsivity of the temporal cortex, especially the right middle temporal gyrus, to external speech during the former state. These results suggest that schizophrenia is associated with a reduced left and increased right temporal cortical response to auditory perception of speech, with little distinction between patients who differ in their vulnerability to hallucinations. The auditory hallucinatory state is associated with reduced activity in temporal cortical regions that overlap with those that normally process external speech, possibly because of competition for common neurophysiological resources.

  12. Unraveling the principles of auditory cortical processing: can we learn from the visual system?

    PubMed Central

    King, Andrew J; Nelken, Israel

    2013-01-01

    Studies of auditory cortex are often driven by the assumption, derived from our better understanding of visual cortex, that basic physical properties of sounds are represented there before being used by higher-level areas for determining sound-source identity and location. However, we only have a limited appreciation of what the cortex adds to the extensive subcortical processing of auditory information, which can account for many perceptual abilities. This is partly because of the approaches that have dominated the study of auditory cortical processing to date, and future progress will unquestionably profit from the adoption of methods that have provided valuable insights into the neural basis of visual perception. At the same time, we propose that there are unique operating principles employed by the auditory cortex that relate largely to the simultaneous and sequential processing of previously derived features and that therefore need to be studied and understood in their own right. PMID:19471268

  13. P50 Suppression in Children with Selective Mutism: A Preliminary Report

    ERIC Educational Resources Information Center

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…

  14. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    PubMed

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.

  15. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys

    PubMed Central

    Poremba, Amy; Mishkin, Mortimer

    2009-01-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left hemisphere “dominance” during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole “dominance” appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys. PMID:17321703

  16. fMRI-based Multivariate Pattern Analyses Reveal Imagery Modality and Imagery Content Specific Representations in Primary Somatosensory, Motor and Auditory Cortices.

    PubMed

    de Borst, Aline W; de Gelder, Beatrice

    2017-08-01

    Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
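
    The multivariate pattern analyses referred to above reduce, in practice, to cross-validated classification of condition labels from voxel activity patterns. The scikit-learn sketch below is a generic illustration under assumed inputs (a trials-by-voxels matrix from one region of interest and per-trial labels); it is not the authors' pipeline.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score, StratifiedKFold

      def roi_decoding_accuracy(patterns, labels, n_splits=5):
          # patterns: (n_trials, n_voxels) response estimates from one ROI;
          # labels: (n_trials,) condition codes (e.g., 0 = touch imagery, 1 = sound imagery).
          # Returns mean cross-validated classification accuracy.
          clf = make_pipeline(StandardScaler(), LinearSVC())
          cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
          return cross_val_score(clf, patterns, labels, cv=cv).mean()

      # Toy example: 80 trials x 500 voxels, two imagery modalities.
      rng = np.random.default_rng(2)
      patterns = rng.standard_normal((80, 500))
      labels = np.repeat([0, 1], 40)
      print("decoding accuracy:", roi_decoding_accuracy(patterns, labels))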

  17. Modulation of Specific Sensory Cortical Areas by Segregated Basal Forebrain Cholinergic Neurons Demonstrated by Neuronal Tracing and Optogenetic Stimulation in Mice

    PubMed Central

    Chaves-Coira, Irene; Barros-Zulaica, Natali; Rodrigo-Angulo, Margarita; Núñez, Ángel

    2016-01-01

    Neocortical cholinergic activity plays a fundamental role in sensory processing and cognitive functions. Previous results have suggested a refined anatomical and functional topographical organization of basal forebrain (BF) projections that may control cortical sensory processing in a specific manner. We have used retrograde anatomical procedures to demonstrate the existence of specific neuronal groups in the BF involved in the control of specific sensory cortices. Fluoro-Gold (FlGo) and Fast Blue (FB) fluorescent retrograde tracers were deposited into the primary somatosensory (S1) and primary auditory (A1) cortices in mice. Our results revealed that the BF is a heterogeneous area in which neurons projecting to different cortical areas are segregated into different neuronal groups. Most of the neurons located in the horizontal limb of the diagonal band of Broca (HDB) projected to the S1 cortex, indicating that this area is specialized in the sensory processing of tactile stimuli. However, the nucleus basalis magnocellularis (B) nucleus shows a similar number of cells projecting to the S1 as to the A1 cortices. In addition, we analyzed the cholinergic effects on the S1 and A1 cortical sensory responses by optogenetic stimulation of the BF neurons in urethane-anesthetized transgenic mice. We used transgenic mice expressing the light-activated cation channel, channelrhodopsin-2, tagged with a fluorescent protein (ChR2-YFP) under the control of the choline-acetyl transferase promoter (ChAT). Cortical evoked potentials were induced by whisker deflections or by auditory clicks. According to the anatomical results, optogenetic HDB stimulation induced more extensive facilitation of tactile evoked potentials in S1 than auditory evoked potentials in A1, while optogenetic stimulation of the B nucleus facilitated either tactile or auditory evoked potentials equally. Consequently, our results suggest that cholinergic projections to the cortex are organized into segregated pools of neurons that may modulate specific cortical areas. PMID:27147975

  18. Modulation of Specific Sensory Cortical Areas by Segregated Basal Forebrain Cholinergic Neurons Demonstrated by Neuronal Tracing and Optogenetic Stimulation in Mice.

    PubMed

    Chaves-Coira, Irene; Barros-Zulaica, Natali; Rodrigo-Angulo, Margarita; Núñez, Ángel

    2016-01-01

    Neocortical cholinergic activity plays a fundamental role in sensory processing and cognitive functions. Previous results have suggested a refined anatomical and functional topographical organization of basal forebrain (BF) projections that may control cortical sensory processing in a specific manner. We have used retrograde anatomical procedures to demonstrate the existence of specific neuronal groups in the BF involved in the control of specific sensory cortices. Fluoro-Gold (FlGo) and Fast Blue (FB) fluorescent retrograde tracers were deposited into the primary somatosensory (S1) and primary auditory (A1) cortices in mice. Our results revealed that the BF is a heterogeneous area in which neurons projecting to different cortical areas are segregated into different neuronal groups. Most of the neurons located in the horizontal limb of the diagonal band of Broca (HDB) projected to the S1 cortex, indicating that this area is specialized in the sensory processing of tactile stimuli. However, the nucleus basalis magnocellularis (B) nucleus shows a similar number of cells projecting to the S1 as to the A1 cortices. In addition, we analyzed the cholinergic effects on the S1 and A1 cortical sensory responses by optogenetic stimulation of the BF neurons in urethane-anesthetized transgenic mice. We used transgenic mice expressing the light-activated cation channel, channelrhodopsin-2, tagged with a fluorescent protein (ChR2-YFP) under the control of the choline-acetyl transferase promoter (ChAT). Cortical evoked potentials were induced by whisker deflections or by auditory clicks. According to the anatomical results, optogenetic HDB stimulation induced more extensive facilitation of tactile evoked potentials in S1 than auditory evoked potentials in A1, while optogenetic stimulation of the B nucleus facilitated either tactile or auditory evoked potentials equally. Consequently, our results suggest that cholinergic projections to the cortex are organized into segregated pools of neurons that may modulate specific cortical areas.

  19. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  20. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    PubMed

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.

  1. Intracerebral evidence of rhythm transform in the human auditory cortex.

    PubMed

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
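    A minimal sketch of a frequency-tagging analysis of the sort described above: take the amplitude spectrum of the recorded signal and measure the response at the beat frequency after subtracting the amplitude of neighbouring bins as a noise estimate. The signal, beat frequency, and noise-correction window below are illustrative assumptions, not the study's parameters.

      import numpy as np

      fs = 500.0                                   # sampling rate (Hz)
      t = np.arange(0, 30, 1 / fs)                 # 30 s of simulated local field potential
      beat_freq = 2.4                              # assumed beat frequency (Hz)
      rng = np.random.default_rng(0)
      lfp = 0.8 * np.sin(2 * np.pi * beat_freq * t) + rng.normal(size=t.size)

      spectrum = np.abs(np.fft.rfft(lfp)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)

      target = np.argmin(np.abs(freqs - beat_freq))
      # Noise correction: subtract the mean amplitude of neighbouring frequency bins.
      neighbours = np.r_[spectrum[target - 5:target - 1], spectrum[target + 2:target + 6]]
      beat_amplitude = spectrum[target] - neighbours.mean()
      print(f"beat-locked amplitude: {beat_amplitude:.3f} (arbitrary units)")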

  2. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  3. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  4. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    PubMed Central

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  6. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  7. Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations

    PubMed Central

    Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.

    2009-01-01

    Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102

  8. Cell-specific gain modulation by synaptically released zinc in cortical circuits of audition.

    PubMed

    Anderson, Charles T; Kumar, Manoj; Xiong, Shanshan; Tzounopoulos, Thanos

    2017-09-09

    In many excitatory synapses, mobile zinc is found within glutamatergic vesicles and is coreleased with glutamate. Ex vivo studies established that synaptically released (synaptic) zinc inhibits excitatory neurotransmission at lower frequencies of synaptic activity but enhances steady state synaptic responses during higher frequencies of activity. However, it remains unknown how synaptic zinc affects neuronal processing in vivo. Here, we imaged the sound-evoked neuronal activity of the primary auditory cortex in awake mice. We discovered that synaptic zinc enhanced the gain of sound-evoked responses in CaMKII-expressing principal neurons, but it reduced the gain of parvalbumin- and somatostatin-expressing interneurons. This modulation was sound intensity-dependent and, in part, NMDA receptor-independent. By establishing a previously unknown link between synaptic zinc and gain control of auditory cortical processing, our findings advance understanding about cortical synaptic mechanisms and create a new framework for approaching and interpreting the role of the auditory cortex in sound processing.

  9. Fundamental deficits of auditory perception in Wernicke's aphasia.

    PubMed

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
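    The adaptive threshold measures mentioned above are typically implemented as staircase procedures. The sketch below simulates a generic 2-down/1-up staircase (which converges on roughly 70.7% correct); the simulated listener, step size, and stopping rule are assumptions for illustration only, not the procedure used in the study.

      import numpy as np

      rng = np.random.default_rng(1)
      true_threshold = 5.0                  # simulated listener's threshold (arbitrary units)
      level, step = 20.0, 2.0
      correct_in_a_row, reversals, last_direction = 0, [], None

      while len(reversals) < 8:
          # Simulated psychometric function: better performance above threshold.
          p_correct = 1.0 / (1.0 + np.exp(-(level - true_threshold)))
          if rng.random() < p_correct:
              correct_in_a_row += 1
              if correct_in_a_row < 2:      # need two correct responses before stepping down
                  continue
              correct_in_a_row = 0
              direction, level = "down", level - step
          else:                             # one incorrect response steps up
              correct_in_a_row = 0
              direction, level = "up", level + step
          if last_direction and direction != last_direction:
              reversals.append(level)       # track reversal points
          last_direction = direction

      print(f"estimated threshold: {np.mean(reversals[-6:]):.2f}")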

  10. Plasticity of spatial hearing: behavioural effects of cortical inactivation

    PubMed Central

    Nodal, Fernando R; Bajo, Victoria M; King, Andrew J

    2012-01-01

    The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635

  11. Brainstem origins for cortical 'what' and 'where' pathways in the auditory system.

    PubMed

    Kraus, Nina; Nicol, Trent

    2005-04-01

    We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.
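    The source-filter model referred to above can be summarised in a few lines of signal processing: a periodic glottal source (whose rate conveys speaker-dependent pitch) is passed through resonant filters standing in for the vocal tract (which shape the message). The sketch below is a toy synthesis; the formant frequencies and bandwidths are rough illustrative values, not taken from the article.

      import numpy as np
      from scipy.signal import lfilter

      fs = 16000
      f0 = 120                                   # source: glottal pulse rate (speaker-dependent pitch)
      t = np.arange(0, 0.5, 1 / fs)
      source = np.zeros(t.size)
      source[::fs // f0] = 1.0                   # impulse train at the fundamental

      vowel = source
      # Filter: vocal-tract resonances (rough formant values for an /a/-like vowel).
      for formant, bandwidth in [(700, 130), (1200, 70), (2600, 160)]:
          r = np.exp(-np.pi * bandwidth / fs)
          theta = 2 * np.pi * formant / fs
          vowel = lfilter([1 - r], [1, -2 * r * np.cos(theta), r ** 2], vowel)

      print("synthesised vowel (first samples):", np.round(vowel[:5], 4))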

  12. Cortical neurons of bats respond best to echoes from nearest targets when listening to natural biosonar multi-echo streams.

    PubMed

    Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred

    2016-10-27

    Bats orientate in darkness by listening to echoes from their biosonar calls, a behaviour known as echolocation. Recent studies showed that cortical neurons respond in a highly selective manner when stimulated with natural echolocation sequences that contain echoes from single targets. However, it remains unknown how cortical neurons process echolocation sequences containing echo information from multiple objects. In the present study, we used echolocation sequences containing echoes from three, two, or one object separated in depth as stimuli to study neuronal activity in the bat auditory cortex. Neuronal activity was recorded with multi-electrode arrays placed in the dorsal auditory cortex, where neurons tuned to target distance are found. Our results show that target-distance encoding neurons are mostly selective to echoes coming from the closest object, and that the representation of echo information from distant objects is selectively suppressed. This suppression extends over a large part of the dorsal auditory cortex and may override possible parallel processing of multiple objects. The presented data suggest that global cortical suppression might establish a cortical "default mode" that allows selective focusing on close obstacles even without active attention from the animals.

  13. Cortical neurons of bats respond best to echoes from nearest targets when listening to natural biosonar multi-echo streams

    PubMed Central

    Beetz, M. Jerome; Hechavarría, Julio C.; Kössl, Manfred

    2016-01-01

    Bats orientate in darkness by listening to echoes from their biosonar calls, a behaviour known as echolocation. Recent studies showed that cortical neurons respond in a highly selective manner when stimulated with natural echolocation sequences that contain echoes from single targets. However, it remains unknown how cortical neurons process echolocation sequences containing echo information from multiple objects. In the present study, we used echolocation sequences containing echoes from three, two, or one object separated in depth as stimuli to study neuronal activity in the bat auditory cortex. Neuronal activity was recorded with multi-electrode arrays placed in the dorsal auditory cortex, where neurons tuned to target distance are found. Our results show that target-distance encoding neurons are mostly selective to echoes coming from the closest object, and that the representation of echo information from distant objects is selectively suppressed. This suppression extends over a large part of the dorsal auditory cortex and may override possible parallel processing of multiple objects. The presented data suggest that global cortical suppression might establish a cortical “default mode” that allows selective focusing on close obstacles even without active attention from the animals. PMID:27786252

  14. The Non-Lemniscal Auditory Cortex in Ferrets: Convergence of Corticotectal Inputs in the Superior Colliculus

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; Bizley, Jennifer K.; King, Andrew J.

    2010-01-01

    Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers. PMID:20640247

  15. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.

  16. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647
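    To make the modelling approach concrete, the toy sketch below fits both a linear receptive-field model and a small nonlinear feedforward network to simulated responses driven by two interacting sub-fields, and compares their predictive power. The architecture, data, and library choices are illustrative assumptions and do not reproduce the authors' network receptive field model.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_samples, n_features = 2000, 160             # e.g. 16 frequency channels x 10 time lags
      X = rng.normal(size=(n_samples, n_features))  # flattened spectrogram patches
      w1 = rng.normal(size=n_features) / np.sqrt(n_features)
      w2 = rng.normal(size=n_features) / np.sqrt(n_features)
      # Simulated firing rate: two sub-receptive fields interacting nonlinearly.
      rate = np.maximum(X @ w1, 0) * np.maximum(X @ w2, 0)
      rate += 0.1 * rng.normal(size=n_samples)

      X_tr, X_te, y_tr, y_te = train_test_split(X, rate, random_state=0)
      linear = Ridge(alpha=1.0).fit(X_tr, y_tr)
      network = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000,
                             random_state=0).fit(X_tr, y_tr)
      print("linear RF  R^2:", round(linear.score(X_te, y_te), 3))
      print("network RF R^2:", round(network.score(X_te, y_te), 3))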

  17. Coding principles of the canonical cortical microcircuit in the avian brain

    PubMed Central

    Calabrese, Ana; Woolley, Sarah M. N.

    2015-01-01

    Mammalian neocortex is characterized by a layered architecture and a common or “canonical” microcircuit governing information flow among layers. This microcircuit is thought to underlie the computations required for complex behavior. Despite the absence of a six-layered cortex, birds are capable of complex cognition and behavior. In addition, the avian auditory pallium is composed of adjacent information-processing regions with genetically identified neuron types and projections among regions comparable with those found in the neocortex. Here, we show that the avian auditory pallium exhibits the same information-processing principles that define the canonical cortical microcircuit, long thought to have evolved only in mammals. These results suggest that the canonical cortical microcircuit evolved in a common ancestor of mammals and birds and provide a physiological explanation for the evolution of neural processes that give rise to complex behavior in the absence of cortical lamination. PMID:25691736

  18. Atypical coordination of cortical oscillations in response to speech in autism

    PubMed Central

    Jochaut, Delphine; Lehongre, Katia; Saitovitch, Ana; Devauchelle, Anne-Dominique; Olasagasti, Itsaso; Chabane, Nadia; Zilbovicius, Monica; Giraud, Anne-Lise

    2015-01-01

    Subjects with autism often show language difficulties, but it is unclear how they relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. Theta activity in left auditory cortex fails to track speech modulations, and to down-regulate gamma oscillations in the group with autism. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations. PMID:25870556
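    One common way to quantify whether low-frequency cortical activity "tracks speech modulations", as discussed above, is spectral coherence between the cortical signal and the speech amplitude envelope. The sketch below illustrates that computation on simulated signals; the sampling rate, band limits, and envelope stand-in are assumptions, not the study's combined EEG-fMRI analysis.

      import numpy as np
      from scipy.signal import coherence, hilbert

      fs = 200.0
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(6)
      envelope = np.abs(hilbert(rng.normal(size=t.size)))   # stand-in for a speech amplitude envelope
      cortical = 0.6 * envelope + rng.normal(size=t.size)   # simulated cortical signal

      f, coh = coherence(envelope, cortical, fs=fs, nperseg=int(4 * fs))
      theta = (f >= 4) & (f <= 7)
      print(f"mean theta-band envelope tracking (coherence): {coh[theta].mean():.2f}")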

  19. Training of Working Memory Impacts Neural Processing of Vocal Pitch Regulation

    PubMed Central

    Li, Weifeng; Guo, Zhiqiang; Jones, Jeffery A.; Huang, Xiyan; Chen, Xi; Liu, Peng; Chen, Shaozhen; Liu, Hanjun

    2015-01-01

    Working memory training can improve the performance of tasks that were not trained. Whether auditory-motor integration for voice control can benefit from working memory training, however, remains unclear. The present event-related potential (ERP) study examined the impact of working memory training on the auditory-motor processing of vocal pitch. Trained participants underwent adaptive working memory training using a digit span backwards paradigm, while control participants did not receive any training. Before and after training, both trained and control participants were exposed to frequency-altered auditory feedback while producing vocalizations. After training, trained participants exhibited significantly decreased N1 amplitudes and increased P2 amplitudes in response to pitch errors in voice auditory feedback. In addition, there was a significant positive correlation between the degree of improvement in working memory capacity and the post-pre difference in P2 amplitudes. Training-related changes in the vocal compensation, however, were not observed. There was no systematic change in either vocal or cortical responses for control participants. These findings provide evidence that working memory training impacts the cortical processing of feedback errors in vocal pitch regulation. This enhanced cortical processing may be the result of increased neural efficiency in the detection of pitch errors between the intended and actual feedback. PMID:26553373

  20. Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.

    PubMed

    Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin

    2018-01-01

    To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds, but also above individual thresholds of temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting state EEG from all participants as well as data on the intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors for auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks, and in SiN, despite having normal peripheral hearing as signaled by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA was not different between the two age groups. Age modulated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question how age-related cortical thinning and intrinsic EEG architecture relates to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency band power relates to the extent of age-related central hearing loss in older adults. The results are discussed within the current frameworks of speech processing and aging.
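    As a concrete example of the lateralization measures discussed above, the sketch below estimates theta-band power over a left and a right temporal channel and expresses the asymmetry as a normalised index. The signals, channel roles, and the exact index definition are assumptions for illustration; the study's analysis may differ.

      import numpy as np
      from scipy.signal import welch

      fs = 250.0
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(2)
      left = np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size)          # simulated left temporal channel
      right = 1.5 * np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size)   # simulated right temporal channel

      def band_power(signal, lo, hi):
          freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
          band = (freqs >= lo) & (freqs <= hi)
          return psd[band].sum() * (freqs[1] - freqs[0])   # integrate PSD over the band

      theta_left, theta_right = band_power(left, 4, 7), band_power(right, 4, 7)
      lateralization = (theta_right - theta_left) / (theta_right + theta_left)
      print(f"theta lateralization index: {lateralization:+.2f} (positive = rightward)")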

  1. The cortical language circuit: from auditory perception to sentence comprehension.

    PubMed

    Friederici, Angela D

    2012-05-01

    Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836
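    For readers unfamiliar with how an MMN is obtained, the sketch below shows the standard derivation: average the epochs for frequent (standard) and infrequent (deviant) stimuli and subtract the two averages. The epochs are simulated; the latencies and amplitudes are illustrative placeholders, not the study's results.

      import numpy as np

      fs = 500
      times = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms
      rng = np.random.default_rng(3)

      def simulate_epochs(n_trials, mmn_amplitude):
          base = np.exp(-((times - 0.17) ** 2) / (2 * 0.03 ** 2))   # deflection near 170 ms
          return -mmn_amplitude * base + rng.normal(scale=0.5, size=(n_trials, times.size))

      standard = simulate_epochs(400, 1.0)          # frequent stimuli
      deviant = simulate_epochs(80, 2.0)            # infrequent pitch changes

      mmn = deviant.mean(axis=0) - standard.mean(axis=0)   # difference wave
      peak_latency = times[np.argmin(mmn)]
      print(f"MMN peak: {peak_latency * 1000:.0f} ms, {mmn.min():.2f} a.u.")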

  3. Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V

    2013-11-15

    Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Abnormal auditory synchronization in stuttering: A magnetoencephalographic study.

    PubMed

    Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo

    2017-02-01

    In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS. Published by Elsevier B.V.
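    The inter-hemispheric phase-locking value reported above is commonly computed by band-pass filtering the two signals, extracting instantaneous phase with the Hilbert transform, and averaging the unit phase-difference vectors. The sketch below illustrates that computation on simulated signals; the filter settings and sampling rate are assumptions, not the study's MEG parameters.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 600.0
      t = np.arange(0, 2, 1 / fs)
      rng = np.random.default_rng(4)
      left = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.normal(size=t.size)         # simulated left auditory signal
      right = np.sin(2 * np.pi * 20 * t + 0.3) + 0.5 * rng.normal(size=t.size)  # simulated right auditory signal

      b, a = butter(4, [15, 30], btype="bandpass", fs=fs)   # beta band
      phase_left = np.angle(hilbert(filtfilt(b, a, left)))
      phase_right = np.angle(hilbert(filtfilt(b, a, right)))

      # PLV: length of the average unit vector of the phase differences.
      plv = np.abs(np.mean(np.exp(1j * (phase_left - phase_right))))
      print(f"beta-band inter-hemispheric PLV: {plv:.2f}")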

  5. Hearing with Two Ears: Evidence for Cortical Binaural Interaction during Auditory Processing.

    PubMed

    Henkin, Yael; Yaar-Soffer, Yifat; Givon, Lihi; Hildesheimer, Minka

    2015-04-01

    Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher cortically driven mechanisms of binaural hearing is limited. To explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple site electrodes while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) that differed by place-of-articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Fifteen (21-32 yr) young adults (6 females) with normal hearing sensitivity. By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived and the latencies and amplitudes of the components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effect of listening condition and laterality. Three consecutive BICs were identified at a mean latency of 129, 406, and 554 msec, and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. Maximal interaction increased significantly with progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of monaural responses for N1-BIC, P3-BIC, and LNC-BIC, respectively. Binaural interaction manifested in a decrease of the binaural response compared to the sum of monaural responses. Furthermore, listening condition affected P3 latency only, whereas laterality effects manifested in enhanced N1 amplitudes at the left (T3) vs. right (T4) scalp electrode and in a greater left-right amplitude difference in the right compared to left listening condition. The current AERP data provides evidence for the occurrence of cortical BICs during perceptual and postperceptual stages, presumably reflecting ongoing integration of information presented to the two ears at the final stages of auditory processing. Increasing binaural interaction with the progression of the auditory processing sequence (N1 to LNC) may support the notion that cortical BICs reflect inherited interactions from preceding stages of upstream processing together with discrete cortical neural activity involved in binaural processing. Clinically, an objective measure of cortical binaural processing has the potential of becoming an appealing neural correlate of binaural behavioral performance. American Academy of Audiology.
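    The BIC derivation described above reduces to simple waveform arithmetic: BIC = (monaural right + monaural left) − binaural, with the maximal interaction expressed as a fraction of the summed monaural response. The sketch below illustrates this on simulated ERP waveforms; the peak latencies and amplitudes are placeholders, not the reported values.

      import numpy as np

      times = np.linspace(0, 0.6, 601)              # 0-600 ms epoch

      def peak(latency, amplitude):
          # Gaussian stand-in for an ERP deflection.
          return amplitude * np.exp(-((times - latency) ** 2) / (2 * 0.02 ** 2))

      monaural_right = peak(0.13, 4.0)              # placeholder N1-like responses (a.u.)
      monaural_left = peak(0.13, 4.2)
      binaural = peak(0.13, 6.0)                    # smaller than the monaural sum

      bic = (monaural_right + monaural_left) - binaural
      maximal_interaction = bic.max() / (monaural_right + monaural_left).max()
      print(f"maximal interaction: {100 * maximal_interaction:.0f}% of the summed monaural response")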

  6. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
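    The variance decomposition technique mentioned above can be illustrated with a simple partition of response variance into the parts explained by each stimulus factor. The sketch below does this for simulated spike counts varying in pitch, timbre, and azimuth; the factor levels and the omission of interaction terms are simplifying assumptions, not the authors' exact method.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(5)
      conditions = pd.MultiIndex.from_product(
          [[200, 336, 565], ["a", "e", "i", "u"], [-45, 0, 45]],
          names=["pitch", "timbre", "azimuth"]).to_frame(index=False)
      trials = conditions.loc[conditions.index.repeat(20)].reset_index(drop=True)

      # Simulated spike counts: sensitive mainly to pitch, weakly to azimuth.
      trials["spikes"] = (0.02 * trials["pitch"] + 0.01 * trials["azimuth"]
                          + rng.normal(scale=2.0, size=len(trials)))

      total_variance = trials["spikes"].var()
      for factor in ["pitch", "timbre", "azimuth"]:
          explained = trials.groupby(factor)["spikes"].transform("mean").var()
          print(f"{factor:8s}: {100 * explained / total_variance:.1f}% of response variance")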

  7. Anatomy of the auditory thalamocortical system in the Mongolian gerbil: nuclear origins and cortical field-, layer-, and frequency-specificities.

    PubMed

    Saldeitis, Katja; Happel, Max F K; Ohl, Frank W; Scheich, Henning; Budinger, Eike

    2014-07-01

    Knowledge of the anatomical organization of the auditory thalamocortical (TC) system is fundamental for the understanding of auditory information processing in the brain. In the Mongolian gerbil (Meriones unguiculatus), a valuable model species in auditory research, the detailed anatomy of this system has not yet been worked out in detail. Here, we investigated the projections from the three subnuclei of the medial geniculate body (MGB), namely, its ventral (MGv), dorsal (MGd), and medial (MGm) divisions, as well as from several of their subdivisions (MGv: pars lateralis [LV], pars ovoidea [OV], rostral pole [RP]; MGd: deep dorsal nucleus [DD]), to the auditory cortex (AC) by stereotaxic pressure injections and electrophysiologically guided iontophoretic injections of the anterograde tract tracer biocytin. Our data reveal highly specific features of the TC connections regarding their nuclear origin in the subdivisions of the MGB and their termination patterns in the auditory cortical fields and layers. In addition to tonotopically organized projections, primarily of the LV, OV, and DD to the AC, a large number of axons diverge across the tonotopic gradient. These originate mainly from the RP, MGd (proper), and MGm. In particular, neurons of the MGm project in a columnar fashion to several auditory fields, forming small- and medium-sized boutons, and also hitherto unknown giant terminals. The distinctive layer-specific distribution of axonal endings within the AC indicates that each of the TC connectivity systems has a specific function in auditory cortical processing. Copyright © 2014 Wiley Periodicals, Inc.

  8. A bilateral cortical network responds to pitch perturbations in speech feedback

    PubMed Central

    Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.

    2014-01-01

    Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223

  9. Representations of Pitch and Timbre Variation in Human Auditory Cortex

    PubMed Central

    2017-01-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
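
    The multivoxel pattern analysis mentioned above can be sketched with a standard linear classifier and cross-validation. The example below uses synthetic voxel patterns and scikit-learn; the data shapes, labels and the weak injected condition difference are purely hypothetical, and this is not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic single-subject data: one pattern of n_voxels per stimulus block,
    # labeled by whether the block varied in pitch (0) or timbre (1).
    rng = np.random.default_rng(1)
    n_blocks, n_voxels = 80, 200
    X = rng.normal(size=(n_blocks, n_voxels))
    y = rng.integers(0, 2, size=n_blocks)
    X[y == 1, :20] += 0.5            # weak, distributed difference between conditions

    # Linear classifier with cross-validation; accuracy reliably above chance (0.5)
    # would indicate discriminable activation patterns for the two dimensions.
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean decoding accuracy:", scores.mean())
    ```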

  10. Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.

    PubMed

    Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W

    2006-05-15

    In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.

  11. Association between heart rhythm and cortical sound processing.

    PubMed

    Marcomini, Renata S; Frizzo, Ana Claúdia F; de Góes, Viviane B; Regaçone, Simone F; Garner, David M; Raimundo, Rodrigo D; Oliveira, Fernando R; Valenti, Vitor E

    2018-04-26

    Sound signal processing is an important factor in human conscious communication, and it may be assessed through cortical auditory evoked potentials (CAEP). Heart rate variability (HRV) provides information about autonomic regulation of heart rate. We investigated the association between resting HRV and the CAEP. We evaluated resting HRV in the time and frequency domains together with the CAEP components. The subjects remained at rest for 10 minutes for HRV recording and then underwent CAEP examinations using frequency and duration protocols in both ears. Linear regression indicated that, in the frequency protocol, the amplitude of the N2 wave of the CAEP in the left ear (but not the right ear) was significantly influenced by two time-domain HRV indices: the standard deviation of normal-to-normal RR intervals (17.7%) and the percentage of adjacent RR intervals differing by more than 50 milliseconds (25.3%). In the duration protocol and in the left ear, the latency of the P2 wave was significantly influenced by the low-frequency (LF; 20.8%) and high-frequency (HF; 21%) bands in normalized units and by the LF/HF ratio (22.4%) from HRV spectral analysis. The latency of the N2 wave was significantly influenced by LF (25.8%), HF (25.9%) and LF/HF (28.8%). In conclusion, we suggest that resting heart rhythm is associated with thalamo-cortical, cortico-cortical and auditory cortex pathways involved in auditory processing in the right hemisphere.
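
    The percentages quoted above are the shares of variance in a CAEP measure explained by an HRV index in a simple linear regression. A minimal sketch of that kind of fit, using made-up per-subject SDNN and N2-amplitude values (the variable names and numbers are illustrative only), might look like this:

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Hypothetical per-subject values: an HRV index (SDNN, ms) and a CAEP
    # measure (N2 amplitude, µV) from the left ear.
    rng = np.random.default_rng(2)
    sdnn = rng.normal(50, 15, size=30)
    n2_amp = 0.04 * sdnn + rng.normal(0, 0.6, size=30)

    fit = linregress(sdnn, n2_amp)
    print(f"slope = {fit.slope:.3f} µV/ms, p = {fit.pvalue:.3g}")
    print(f"variance explained (R^2) = {fit.rvalue**2:.1%}")   # cf. the percentages reported above
    ```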

  12. Information-Processing Modules and Their Relative Modality Specificity

    ERIC Educational Resources Information Center

    Anderson, John R.; Qin, Yulin; Jung, Kwan-Jin; Carter, Cameron S.

    2007-01-01

    This research uses fMRI to understand the role of eight cortical regions in a relatively complex information-processing task. Modality of input (visual versus auditory) and modality of output (manual versus vocal) are manipulated. Two perceptual regions (auditory cortex and fusiform gyrus) only reflected perceptual encoding. Two motor regions were…

  13. Subcortical encoding of sound is enhanced in bilinguals and relates to executive function advantages

    PubMed Central

    Krizman, Jennifer; Marian, Viorica; Shook, Anthony; Skoe, Erika; Kraus, Nina

    2012-01-01

    Bilingualism profoundly affects the brain, yielding functional and structural changes in cortical regions dedicated to language processing and executive function [Crinion J, et al. (2006) Science 312:1537–1540; Kim KHS, et al. (1997) Nature 388:171–174]. Comparatively, musical training, another type of sensory enrichment, translates to expertise in cognitive processing and refined biological processing of sound in both cortical and subcortical structures. Therefore, we asked whether bilingualism can also promote experience-dependent plasticity in subcortical auditory processing. We found that adolescent bilinguals, listening to the speech syllable [da], encoded the stimulus more robustly than age-matched monolinguals. Specifically, bilinguals showed enhanced encoding of the fundamental frequency, a feature known to underlie pitch perception and grouping of auditory objects. This enhancement was associated with executive function advantages. Thus, through experience-related tuning of attention, the bilingual auditory system becomes highly efficient in automatically processing sound. This study provides biological evidence for system-wide neural plasticity in auditory experts that facilitates a tight coupling of sensory and cognitive functions. PMID:22547804

  14. Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.

    PubMed

    Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G

    2016-02-01

    Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
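
    Interhemispheric coherence of the sort computed in this study can be estimated with Welch-style spectral methods. The sketch below simulates two channels sharing a weak 10 Hz component and averages their magnitude-squared coherence over the alpha band; the sampling rate, signal construction and band limits are assumptions for the example, not the study's parameters.

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 250.0                         # sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)       # one minute of simulated EEG
    rng = np.random.default_rng(3)

    # Two channels over left/right temporal sites sharing a weak 10 Hz component,
    # standing in for oscillations over the two auditory cortices.
    shared = np.sin(2 * np.pi * 10 * t)
    left = 0.4 * shared + rng.normal(size=t.size)
    right = 0.4 * shared + rng.normal(size=t.size)

    f, coh = coherence(left, right, fs=fs, nperseg=512)
    alpha = (f >= 8) & (f <= 12)
    print("mean alpha-band interhemispheric coherence:", coh[alpha].mean())
    ```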

  15. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    PubMed

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  16. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia

    PubMed Central

    Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10-week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex and engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  17. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations.

    PubMed

    Stropahl, Maren; Chen, Ling-Chia; Debener, Stefan

    2017-01-01

    With the advances of cochlear implant (CI) technology, many deaf individuals can partially regain their hearing ability. However, there is a large variation in the level of recovery. Cortical changes induced by hearing deprivation and restoration with CIs have been thought to contribute to this variation. The current review aims to identify these cortical changes in postlingually deaf CI users and discusses their maladaptive or adaptive relationship to the CI outcome. Overall, intra-modal and cross-modal reorganization patterns have been identified in postlingually deaf CI users in visual and in auditory cortex. Even though cross-modal activation in auditory cortex is considered as maladaptive for speech recovery in CI users, a similar activation relates positively to lip reading skills. Furthermore, cross-modal activation of the visual cortex seems to be adaptive for speech recognition. Currently available evidence points to an involvement of further brain areas and suggests that a focus on the reversal of visual take-over of the auditory cortex may be too limited. Future investigations should consider expanded cortical as well as multi-sensory processing and capture different hierarchical processing steps. Furthermore, prospective longitudinal designs are needed to track the dynamics of cortical plasticity that takes place before and after implantation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Auditory Neuroscience: Temporal Anticipation Enhances Cortical Processing

    PubMed Central

    Walker, Kerry M. M.; King, Andrew J.

    2015-01-01

    A recent study shows that expectation about the timing of behaviorally-relevant sounds enhances the responses of neurons in the primary auditory cortex and improves the accuracy and speed with which animals respond to those sounds. PMID:21481759

  19. Transmodal comparison of auditory, motor, and visual post-processing with and without intentional short-term memory maintenance.

    PubMed

    Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias

    2010-12-01

    To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. The N700 indicated that cortical post-processing outlasted short movements, as well as short auditory or visual stimuli, by over half a second even without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory-maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralized modality-dependent post-processing N700 component which also occurs without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  20. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions

    PubMed Central

    Intskirveli, Irakli

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral “notch” of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing. PMID:28660244

  1. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions.

    PubMed

    Askew, Caitlin; Intskirveli, Irakli; Metherate, Raju

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral "notch" of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing.
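
    Current-source density analysis, as used in this study, is commonly approximated by the negative second spatial derivative of the local field potential across equally spaced laminar contacts. The function below is a minimal one-dimensional sketch on toy data; the channel count, electrode spacing and conductivity value are assumptions, not the authors' recording parameters.

    ```python
    import numpy as np

    def csd(lfp, spacing_um=100.0, conductivity=0.3):
        """One-dimensional current-source density estimate: the negative second
        spatial derivative of the LFP across equally spaced laminar channels.
        `lfp` has shape (n_channels, n_timepoints); spacing in micrometers,
        conductivity in S/m. Returns the CSD at the interior channels."""
        h = spacing_um * 1e-6                                   # electrode spacing in meters
        d2 = lfp[:-2, :] - 2 * lfp[1:-1, :] + lfp[2:, :]        # discrete second derivative along depth
        return -conductivity * d2 / h**2                        # sinks and sources, by the usual sign convention

    # Toy laminar recording: 16 channels x 500 time points of simulated LFP (volts).
    rng = np.random.default_rng(4)
    lfp = rng.normal(scale=1e-4, size=(16, 500))
    print(csd(lfp).shape)   # (14, 500): CSD at the interior electrode contacts
    ```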

  2. How do neurons work together? Lessons from auditory cortex.

    PubMed

    Harris, Kenneth D; Bartho, Peter; Chadderton, Paul; Curto, Carina; de la Rocha, Jaime; Hollender, Liad; Itskov, Vladimir; Luczak, Artur; Marguet, Stephan L; Renart, Alfonso; Sakata, Shuzo

    2011-01-01

    Recordings of single neurons have yielded great insights into the way acoustic stimuli are represented in auditory cortex. However, any one neuron functions as part of a population whose combined activity underlies cortical information processing. Here we review some results obtained by recording simultaneously from auditory cortical populations and individual morphologically identified neurons, in urethane-anesthetized and unanesthetized passively listening rats. Auditory cortical populations produced structured activity patterns both in response to acoustic stimuli and spontaneously, without sensory input. Population spike time patterns were broadly conserved across multiple sensory stimuli and spontaneous events, exhibiting a generally conserved sequential organization lasting approximately 100 ms. Both spontaneous and evoked events exhibited sparse, spatially localized activity in layer 2/3 pyramidal cells, and densely distributed activity in larger layer 5 pyramidal cells and putative interneurons. Laminar propagation differed, however, with spontaneous activity spreading upward from deep layers and slowly across columns, but sensory responses initiating in presumptive thalamorecipient layers, spreading rapidly across columns. In both unanesthetized and urethanized rats, global activity fluctuated between a "desynchronized" state characterized by low-amplitude, high-frequency local field potentials and a "synchronized" state of larger, lower-frequency waves. Computational studies suggested that responses could be predicted by a simple dynamical system model fitted to the spontaneous activity immediately preceding stimulus presentation. Fitting this model to the data yielded a nonlinear self-exciting system model in synchronized states and an approximately linear system in desynchronized states. We comment on the significance of these results for auditory cortical processing of acoustic and non-acoustic information. © 2010 Elsevier B.V. All rights reserved.

  3. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning.

    PubMed

    de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek

    2018-05-01

    The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.

  4. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    PubMed Central

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation of the lemniscal system, comparable to repetitive tonal stimulation, evokes three major types of changes in the physiological properties of cortical and subcortical auditory neurons, such as their tuning to specific values of acoustic parameters, through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulations of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273

  5. Quantitative analysis of neuronal response properties in primary and higher-order auditory cortical fields of awake house mice (Mus musculus)

    PubMed Central

    Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone

    2014-01-01

    Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843
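
    Two of the response properties quantified in such studies, the characteristic frequency (CF) and best frequency (BF), can be read off a unit's frequency-response area. The sketch below does this on a toy level-by-frequency response matrix; the response criterion and all numbers are illustrative assumptions rather than the authors' procedure.

    ```python
    import numpy as np

    def cf_and_bf(fra, freqs, levels):
        """Extract characteristic frequency (CF) and best frequency (BF) from a
        frequency-response area `fra` of shape (n_levels, n_freqs), with levels
        ordered from quietest to loudest. CF: frequency driving the unit at the
        lowest effective sound level; BF: frequency giving the largest response
        at the highest level. Returns (CF, BF, minimum response threshold)."""
        criterion = fra.mean() + 2 * fra.std()            # crude response criterion
        driven = fra > criterion
        lowest = np.argmax(driven.any(axis=1))            # first level (from quiet) with any response
        cf = freqs[np.argmax(fra[lowest])]
        bf = freqs[np.argmax(fra[-1])]
        return cf, bf, levels[lowest]

    # Toy FRA: 8 levels x 30 frequencies with a responsive region at mid frequencies.
    freqs = np.logspace(np.log10(2e3), np.log10(40e3), 30)
    levels = np.arange(10, 90, 10)
    rng = np.random.default_rng(9)
    fra = rng.normal(size=(8, 30))
    fra[4:, 12:18] += 5.0
    print(cf_and_bf(fra, freqs, levels))
    ```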

  6. Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications.

    PubMed

    Glick, Hannah; Sharma, Anu

    2017-01-01

    This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon in which deprivation in one sensory modality (e.g. the auditory modality as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g. visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral congenital deafness, single-sided deafness, and adults with early-stage, mild-moderate hearing loss, as well as in individual adult and pediatric patients exhibiting excellent or average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems, and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention. Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing-impaired populations across the lifespan. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Auditory Processing in Infancy: Do Early Abnormalities Predict Disorders of Language and Cognitive Development?

    ERIC Educational Resources Information Center

    Guzzetta, Francesco; Conti, Guido; Mercuri, Eugenio

    2011-01-01

    Increasing attention has been devoted to the maturation of sensory processing in the first year of life. While the development of cortical visual function has been thoroughly studied, much less information is available on auditory processing and its early disorders. The aim of this paper is to provide an overview of the assessment techniques for…

  8. Auditory and visual cortex of primates: a comparison of two sensory systems

    PubMed Central

    Rauschecker, Josef P.

    2014-01-01

    A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177

  9. Self-monitoring in the cerebral cortex: Neural responses to small pitch shifts in auditory feedback during speech production.

    PubMed

    Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs

    2018-06-21

    Speaking is a complex motor skill that requires near-instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting that auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Evidence of degraded representation of speech in noise, in the aging midbrain and cortex

    PubMed Central

    Simon, Jonathan Z.; Anderson, Samira

    2016-01-01

    Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To understand better the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech when presented in quiet and in the presence of a single-competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and in the cortex. Specifically, midbrain frequency following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity that may impair their ability to encode speech efficiently. PMID:27535374
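
    Cortical envelope tracking of the kind described here is often quantified by relating the low-frequency speech envelope to the recorded response. A heavily simplified sketch, using a Hilbert envelope, a low-pass filter and a plain correlation on simulated signals (the filter settings and signal model are assumptions, not the authors' reconstruction method), is shown below.

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 500.0
    rng = np.random.default_rng(5)
    speech = rng.normal(size=int(10 * fs))                  # stand-in for a speech waveform
    envelope = np.abs(hilbert(speech))                      # broadband amplitude envelope

    # Low-pass both the envelope and a simulated cortical response at 10 Hz,
    # then correlate them as a crude index of envelope tracking.
    b, a = butter(4, 10 / (fs / 2), btype="low")
    env_lp = filtfilt(b, a, envelope)
    response = 0.5 * env_lp + rng.normal(scale=env_lp.std(), size=env_lp.size)
    resp_lp = filtfilt(b, a, response)

    r = np.corrcoef(env_lp, resp_lp)[0, 1]
    print("envelope-tracking correlation:", round(r, 3))
    ```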

  11. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  12. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  13. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    PubMed

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability was observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
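
    In a time bisection task like the one described, performance is typically summarized by fitting a psychometric function to the proportion of "long" responses and reading off the bisection point; steeper functions indicate lower temporal variability. The sketch below fits a logistic function to hypothetical group data (the values, the logistic form and the variability index are illustrative assumptions only).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical group data from a 300/900 ms bisection task: proportion of
    # "long" responses at each comparison duration.
    durations = np.array([300, 400, 500, 600, 700, 800, 900], dtype=float)   # ms
    p_long = np.array([0.04, 0.12, 0.35, 0.58, 0.80, 0.93, 0.97])

    def logistic(d, bp, slope):
        """Psychometric function; bp is the bisection point (p = 0.5)."""
        return 1.0 / (1.0 + np.exp(-(d - bp) / slope))

    (bp, slope), _ = curve_fit(logistic, durations, p_long, p0=(600.0, 80.0))
    weber = slope / bp    # one simple (proxy) index of temporal variability
    print(f"bisection point ~ {bp:.0f} ms, variability index ~ {weber:.2f}")
    ```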

  14. Pure word deafness following left temporal damage: Behavioral and neuroanatomical evidence from a new case.

    PubMed

    Maffei, Chiara; Capasso, Rita; Cazzolli, Giulia; Colosimo, Cesare; Dell'Acqua, Flavio; Piludu, Francesca; Catani, Marco; Miceli, Gabriele

    2017-12-01

    Pure Word Deafness (PWD) is a rare disorder, characterized by selective loss of speech input processing. Its most common cause is temporal damage to the primary auditory cortex of both hemispheres, but it has also been reported following unilateral lesions. In unilateral cases, PWD has been attributed to the disconnection of Wernicke's area from both right and left primary auditory cortex. Here we report behavioral and neuroimaging evidence from a new case of left unilateral PWD with both cortical and white matter damage due to a relatively small stroke lesion in the left superior temporal gyrus. Selective impairment in auditory language processing was accompanied by intact processing of nonspeech sounds and normal speech, reading and writing. Performance on dichotic listening was characterized by a reversal of the right-ear advantage typically observed in healthy subjects. Cortical thickness and gyral volume were severely reduced in the left superior temporal gyrus (STG), although abnormalities were not uniformly distributed and residual intact cortical areas were detected, for example in the medial portion of Heschl's gyrus. Diffusion tractography documented partial damage to the acoustic radiations (AR), callosal temporal connections and intralobar tracts dedicated to single-word comprehension. Behavioral and neuroimaging results in this case are difficult to integrate into a purely cortical or disconnection framework, as damage to primary auditory cortex in the left STG was only partial and Wernicke's area was not completely isolated from left or right-hemisphere input. On the basis of our findings we suggest that in this case of PWD, concurrent partial topological (cortical) and disconnection mechanisms have contributed to a selective impairment of speech sound processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Representations of Spectral Differences between Vowels in Tonotopic Regions of Auditory Cortex

    ERIC Educational Resources Information Center

    Fisher, Julia

    2017-01-01

    This work examines the link between low-level cortical acoustic processing and higher-level cortical phonemic processing. Specifically, using functional magnetic resonance imaging, it looks at 1) whether or not the vowels [ɑ] and [i] are distinguishable in regions of interest defined by the first two resonant frequencies (formants) of those…

  16. Matrix metalloproteinase-9 deletion rescues auditory evoked potential habituation deficit in a mouse model of Fragile X Syndrome

    PubMed Central

    Lovelace, Jonathan W.; Wen, Teresa H.; Reinhard, Sarah; Hsu, Mike S.; Sidhu, Harpreet; Ethell, Iryna M.; Binder, Devin K.; Razak, Khaleel A.

    2016-01-01

    Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition, and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event-related potentials (ERPs) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate-dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to that in WT mice, implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological preclinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. PMID:26850918
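
    ERP habituation in paradigms like this one is often expressed as the amplitude of responses to later sounds in a train relative to the first sound. The sketch below computes such an index from toy baseline-corrected ERP segments; the analysis window, train length and amplitude measure are assumptions for illustration, not the authors' exact method.

    ```python
    import numpy as np

    def n1_amplitudes(erps, fs, window=(0.05, 0.15)):
        """Largest absolute deflection within an N1-like window for each sound
        in a train. `erps` has shape (n_stimuli_in_train, n_timepoints),
        baseline-corrected and time-locked to each sound onset."""
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return np.abs(erps[:, i0:i1]).max(axis=1)

    # Habituation index: amplitude to later sounds relative to the first sound;
    # values near 1 mean little habituation (as reported for the Fmr1 KO mice).
    rng = np.random.default_rng(6)
    fs = 1000
    train = rng.normal(scale=0.2, size=(4, 300))
    train[0, 80:120] += 2.0          # strong response to the first sound
    train[1:, 80:120] += 1.0         # partially habituated responses afterwards

    amps = n1_amplitudes(train, fs)
    print("habituation index:", (amps[1:].mean() / amps[0]).round(2))
    ```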

  17. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p<1e(-8)). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
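
    Granger causality between band-power time series, as applied here, can be tested with standard time-series tools. The sketch below uses statsmodels on simulated high-gamma and alpha power envelopes in which high gamma leads by a fixed lag; the sampling rate, lag and noise model are assumptions for the example, not the study's parameters.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    # Simulated band-power envelopes (arbitrary units) sampled at 10 Hz:
    # high-gamma power leads alpha power by ~300 ms, so high gamma should
    # Granger-predict alpha but not the reverse.
    rng = np.random.default_rng(7)
    n, lag = 2000, 3                              # 3 samples at 10 Hz ~ 300 ms
    hg = rng.normal(size=n)
    alpha = np.roll(hg, lag) + 0.5 * rng.normal(size=n)

    # Column order matters: the test asks whether column 2 Granger-causes column 1.
    res = grangercausalitytests(np.column_stack([alpha, hg]), maxlag=5)
    p = res[lag][0]["ssr_ftest"][1]
    print("p(high gamma -> alpha) at lag", lag, "=", p)
    ```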

  18. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  19. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    PubMed

    Suga, Nobuo

    2018-04-01

    For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area is particularly involved in Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
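
    The published model is considerably richer than anything shown here, but the core ingredient of such neurocomputational accounts, a multisensory unit whose sigmoidal response to summed visual and auditory drive yields superadditive enhancement for weak inputs, can be illustrated in a few lines. All parameter values below are arbitrary assumptions.

    ```python
    import numpy as np

    def sc_response(visual, auditory, threshold=1.0, gain=5.0):
        """Toy multisensory unit: a sigmoid applied to summed visual and auditory
        drive, which yields superadditive enhancement for weak inputs."""
        net = visual + auditory
        return 1.0 / (1.0 + np.exp(-gain * (net - threshold)))

    v, a = 0.4, 0.4                       # weak unisensory inputs (arbitrary units)
    uni_v, uni_a = sc_response(v, 0.0), sc_response(0.0, a)
    multi = sc_response(v, a)
    enhancement = 100 * (multi - max(uni_v, uni_a)) / max(uni_v, uni_a)
    print(f"multisensory enhancement ~ {enhancement:.0f}%")
    ```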

  1. Central Auditory Maturation and Behavioral Outcome in Children with Auditory Neuropathy Spectrum Disorder who Use Cochlear Implants

    PubMed Central

    Cardon, Garrett; Sharma, Anu

    2013-01-01

    Objective We examined cortical auditory development and behavioral outcomes in children with ANSD fitted with cochlear implants (CI). Design Cortical maturation, measured by P1 cortical auditory evoked potential (CAEP) latency, was regressed against scores on the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS). Implantation age was also considered in relation to CAEP findings. Study Sample Cross-sectional and longitudinal samples of 24 and 11 children, respectively, with ANSD fitted with CIs. Result P1 CAEP responses were present in all children after implantation, though previous findings suggest that only 50-75% of ANSD children with hearing aids show CAEP responses. P1 CAEP latency was significantly correlated with participants' IT-MAIS scores. Furthermore, more children implanted before age two years showed normal P1 latencies, while those implanted later mainly showed delayed latencies. Longitudinal analysis revealed that most children showed normal or improved cortical maturation after implantation. Conclusion Cochlear implantation resulted in measurable cortical auditory development for all children with ANSD. Children fitted with CIs under age two years were more likely to show age-appropriate CAEP responses within 6 months after implantation, suggesting a possible sensitive period for cortical auditory development in ANSD. The correlation between CAEP responses and behavioral outcome highlights their utility in clinical decision-making. PMID:23819618
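
    The central analysis described above is a regression of a behavioral score on P1 latency. A minimal sketch of that kind of analysis is shown below with hypothetical numbers; it is not the study's data or exact statistical procedure.

    ```python
    # Minimal sketch of regressing behavioral scores on P1 CAEP latency.
    # The numbers below are hypothetical, not data from the study.
    import numpy as np
    from scipy import stats

    p1_latency_ms = np.array([110, 125, 140, 160, 175, 190, 210, 230])   # hypothetical
    it_mais_score = np.array([38, 36, 33, 30, 27, 24, 20, 15])           # hypothetical

    res = stats.linregress(p1_latency_ms, it_mais_score)
    print(f"slope = {res.slope:.2f} points/ms, r = {res.rvalue:.2f}, p = {res.pvalue:.3g}")
    ```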

  2. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    PubMed

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). Only in the temporal tasks, however, was the STRF changed along the temporal dimension, through a sharpening of temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at the network and single-unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.
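
    The abstract does not spell out how the STRFs were estimated; a common approach for this kind of data is reverse correlation of spikes against the stimulus spectrogram. The sketch below shows a plain spike-triggered-average STRF under that assumption, with random stand-in data.

    ```python
    # Sketch of a reverse-correlation (spike-triggered average) STRF estimate.
    # Assumes a stimulus spectrogram and a binned spike train are already available;
    # this is a generic method, not the specific estimator used in the study.
    import numpy as np

    def strf_sta(spectrogram: np.ndarray, spikes: np.ndarray, n_lags: int) -> np.ndarray:
        """spectrogram: (n_freq, n_time) stimulus power; spikes: (n_time,) spike counts.
        Returns an (n_freq, n_lags) spike-triggered average."""
        n_freq, n_time = spectrogram.shape
        sta = np.zeros((n_freq, n_lags))
        for t in range(n_lags, n_time):
            if spikes[t] > 0:
                # stimulus history preceding the spike, most recent lag first
                sta += spikes[t] * spectrogram[:, t - n_lags:t][:, ::-1]
        n_spikes = spikes[n_lags:].sum()
        return sta / max(n_spikes, 1)

    # Example with random stand-in data
    rng = np.random.default_rng(0)
    spec = rng.random((32, 5000))          # 32 frequency channels, 5000 time bins
    spk = rng.poisson(0.1, 5000)           # toy spike counts
    print(strf_sta(spec, spk, n_lags=20).shape)   # -> (32, 20)
    ```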

  3. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    PubMed

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch relevant periodicities contained in the stimulus. In contrast, the cortical pitch relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch relevant information.
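
    One way to make the brainstem half of this argument concrete is the standard frequency-following-response analysis: Fourier-transform the averaged response and read off the amplitude at the stimulus fundamental as an index of sustained phase-locking. The sketch below shows only that generic step, on a synthetic response with an assumed 100-Hz F0; it is not the authors' pipeline.

    ```python
    # Sketch: quantify brainstem phase-locking at the stimulus F0 from an averaged
    # frequency-following response. Synthetic data; F0 and sampling rate are assumed.
    import numpy as np

    fs = 2000.0                        # sampling rate, Hz (assumed)
    t = np.arange(0, 0.25, 1 / fs)     # 250-ms response window
    f0 = 100.0                         # assumed stimulus fundamental, Hz
    ffr = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.random.randn(t.size)  # toy FFR

    spectrum = np.abs(np.fft.rfft(ffr)) / t.size * 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    f0_bin = np.argmin(np.abs(freqs - f0))
    print(f"Amplitude at F0 ({freqs[f0_bin]:.0f} Hz): {spectrum[f0_bin]:.3f}")
    ```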

  4. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2015-01-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch relevant periodicities contained in the stimulus. In contrast, the cortical pitch relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch relevant information. PMID:25838636

  5. Transcranial fluorescence imaging of auditory cortical plasticity regulated by acoustic environments in mice.

    PubMed

    Takahashi, Kuniyuki; Hishida, Ryuichi; Kubota, Yamato; Kudoh, Masaharu; Takahashi, Sugata; Shibuki, Katsuei

    2006-03-01

    Functional brain imaging using endogenous fluorescence of mitochondrial flavoprotein is useful for investigating mouse cortical activities via the intact skull, which is thin and sufficiently transparent in mice. We applied this method to investigate auditory cortical plasticity regulated by acoustic environments. Normal mice of the C57BL/6 strain, reared in various acoustic environments for at least 4 weeks after birth, were anaesthetized with urethane (1.7 g/kg, i.p.). Auditory cortical images of endogenous green fluorescence in blue light were recorded by a cooled CCD camera via the intact skull. Cortical responses elicited by tonal stimuli (5, 10 and 20 kHz) exhibited mirror-symmetrical tonotopic maps in the primary auditory cortex (AI) and anterior auditory field (AAF). Auditory cortical responses were depressed, in terms of response duration, in sound-deprived mice compared with naïve mice reared in a normal acoustic environment. When mice were exposed to an environmental tonal stimulus at 10 kHz for more than 4 weeks after birth, the cortical responses were potentiated in a frequency-specific manner with respect to the peak amplitude of the responses in AI, but not the size of the responsive areas. Changes in AAF were less clear than those in AI. To determine which synapses were modified by the acoustic environment, neural responses in cortical slices were investigated with endogenous fluorescence imaging. The vertical thickness of responsive areas after supragranular electrical stimulation was significantly reduced in the slices obtained from sound-deprived mice. These results suggest that acoustic environments regulate the development of vertical intracortical circuits in the mouse auditory cortex.
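
    Flavoprotein autofluorescence responses of this kind are commonly expressed as a fractional fluorescence change (ΔF/F) relative to a pre-stimulus baseline. The sketch below shows that normalization step on a synthetic image stack; it is an assumption about the analysis, not the study's exact processing chain.

    ```python
    # Sketch: fractional fluorescence change (dF/F) from an imaging stack, relative
    # to a pre-stimulus baseline. Synthetic data; the study's analysis may differ.
    import numpy as np

    def delta_f_over_f(stack: np.ndarray, n_baseline: int) -> np.ndarray:
        """stack: (n_frames, height, width). Returns dF/F with the same shape,
        using the mean of the first n_baseline frames as F0."""
        f0 = stack[:n_baseline].mean(axis=0)
        return (stack - f0) / (f0 + 1e-9)

    rng = np.random.default_rng(1)
    stack = 100 + rng.normal(0, 1, size=(40, 64, 64))   # toy fluorescence frames
    stack[20:30, 20:40, 20:40] += 3.0                   # simulated tone-evoked response
    dff = delta_f_over_f(stack, n_baseline=10)
    print("peak of mean response map:", round(float(dff[20:30].mean(axis=0).max()), 3))
    ```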

  6. Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions

    PubMed Central

    Cardin, Velia

    2016-01-01

    Hearing loss is a common feature in human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss during older age. Aging also has well-documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control and re-allocation of attention. These cortical mechanisms are engaged during listening in effortful conditions in normal hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful and cognitive load is constantly high, reducing the amount of available cognitive resources. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss. PMID:27242405

  7. Thalamic and cortical pathways supporting auditory processing

    PubMed Central

    Lee, Charles C.

    2012-01-01

    The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130

  8. Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.

    PubMed

    Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S

    2013-05-01

    Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.
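
    The rate effects described above (BOLD amplitude rising with presentation rate in one group, flat or falling in the other) can be summarized with a per-group linear fit of response amplitude against stimulus rate. The sketch below shows only that summary step, with invented numbers.

    ```python
    # Sketch: per-group linear fit of BOLD amplitude vs. stimulus presentation rate.
    # All numbers are invented; this is not the study's data or exact model.
    import numpy as np

    rates = np.array([1, 2, 4, 8])                 # presentations per second (assumed)
    bold_young = np.array([0.4, 0.6, 0.9, 1.2])    # hypothetical % signal change
    bold_old = np.array([0.8, 0.7, 0.6, 0.5])      # hypothetical % signal change

    slope_young = np.polyfit(rates, bold_young, 1)[0]
    slope_old = np.polyfit(rates, bold_old, 1)[0]
    print(f"younger slope: {slope_young:+.3f} %/Hz, older slope: {slope_old:+.3f} %/Hz")
    ```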

  9. Developmental and cross-modal plasticity in deafness: evidence from the P1 and N1 event related potentials in cochlear implanted children.

    PubMed

    Sharma, Anu; Campbell, Julia; Cardon, Garrett

    2015-02-01

    Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., auditory) in cochlear implanted children may influence their behavioral outcomes with the implant. Improvements in our understanding of developmental neuroplasticity in the auditory system should lead to harnessing central auditory plasticity for superior clinical technique. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    PubMed Central

    Sörös, Peter; Michael, Nikolaus; Tollkötter, Melanie; Pfleiderer, Bettina

    2006-01-01

    Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex. PMID:16884545

  11. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  12. P50 suppression in children with selective mutism: a preliminary report.

    PubMed

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in children with SM. The classic paired-click paradigm was utilized to assess suppression of the P50 event-related potential to the second of two sequentially presented clicks in 10 children with SM and 10 control children. A significant suppression of P50 to the second click was evident in the SM group, whereas no suppression effect was observed in controls. Suppression was evident in 90% of the SM group and in 40% of controls, whereas augmentation was found in 10% and 60%, respectively, yielding a significant association between group and suppression of P50. P50 to the first click was comparable in children with SM and controls. The adult-like, mature P50 suppression effect exhibited by children with SM may reflect a cortical mechanism of compensatory inhibition of irrelevant repetitive information that was not properly suppressed at lower levels of their auditory system. The current data extend our previous findings suggesting that differential auditory processing may be involved in speech selectivity in SM.
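
    P50 sensory gating in the paired-click paradigm is conventionally summarized as the ratio of the second-click to the first-click P50 amplitude, with lower ratios indicating stronger suppression. The sketch below computes that conventional index on invented waveforms; the peak window and amplitudes are assumptions, not the study's parameters.

    ```python
    # Sketch: conventional P50 suppression index from a paired-click paradigm,
    # expressed as the S2/S1 amplitude ratio (waveforms below are invented).
    import numpy as np

    def p50_amplitude(erp: np.ndarray, times_ms: np.ndarray) -> float:
        """Peak amplitude in a 40-80 ms post-click window (a common convention)."""
        window = (times_ms >= 40) & (times_ms <= 80)
        return float(erp[window].max())

    times = np.arange(-100, 400)                       # 1-ms resolution epoch
    rng = np.random.default_rng(2)
    erp_click1 = rng.normal(0, 0.2, times.size); erp_click1[150:170] += 2.0   # ~50-70 ms
    erp_click2 = rng.normal(0, 0.2, times.size); erp_click2[150:170] += 0.8

    s1 = p50_amplitude(erp_click1, times)
    s2 = p50_amplitude(erp_click2, times)
    print(f"S2/S1 ratio = {s2 / s1:.2f} (lower = stronger suppression)")
    ```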

  13. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing.

    PubMed

    Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako

    2014-10-15

    When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.
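
    The stimulus described above is a binaural beat whose rate sweeps continuously between 3 and 60 Hz. A simple way to build such a stimulus is to fix the left-ear tone and integrate a time-varying difference frequency to obtain the right-ear phase, as in the sketch below; the carrier frequency and sweep profile are placeholders, not the study's exact stimulus.

    ```python
    # Sketch: binaural-beat stimulus whose beat rate sweeps from 3 to 60 Hz.
    # Carrier frequency and sweep profile are assumptions, not the study's stimulus.
    import numpy as np

    fs = 44100
    dur = 10.0
    t = np.arange(0, dur, 1 / fs)
    f_carrier = 400.0                                   # left-ear tone (assumed)
    beat_rate = np.linspace(3.0, 60.0, t.size)          # desired difference frequency

    left = np.sin(2 * np.pi * f_carrier * t)
    # Integrate the instantaneous frequency to get the right-ear phase.
    phase_right = 2 * np.pi * np.cumsum(f_carrier + beat_rate) / fs
    right = np.sin(phase_right)
    stereo = np.stack([left, right], axis=1)            # present dichotically
    print(stereo.shape, "beat rate at t = 5 s:", round(float(beat_rate[t.size // 2]), 1), "Hz")
    ```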

  14. Hierarchical neurocomputations underlying concurrent sound segregation: connecting periphery to percept.

    PubMed

    Bidelman, Gavin M; Alain, Claude

    2015-02-01

    Natural soundscapes often contain multiple sound sources at any given time. Numerous studies have reported that in human observers, the perception and identification of concurrent sounds is paralleled by specific changes in cortical event-related potentials (ERPs). Although these studies provide a window into the cerebral mechanisms governing sound segregation, little is known about the subcortical neural architecture and hierarchy of neurocomputations that lead to this robust perceptual process. Using computational modeling, scalp-recorded brainstem/cortical ERPs, and human psychophysics, we demonstrate that a primary cue for sound segregation, i.e., harmonicity, is encoded at the auditory nerve level within tens of milliseconds after the onset of sound and is maintained, largely untransformed, in phase-locked activity of the rostral brainstem. As then indexed by auditory cortical responses, (in)harmonicity is coded in the signature and magnitude of the cortical object-related negativity (ORN) response (150-200 ms). The salience of the resulting percept is then captured in a discrete, categorical-like coding scheme by a late negativity response (N5; ~500 ms latency), just prior to the elicitation of a behavioral judgment. Subcortical activity correlated with cortical evoked responses such that weaker phase-locked brainstem responses (lower neural harmonicity) generated larger ORN amplitude, reflecting the cortical registration of multiple sound objects. Studying multiple brain indices simultaneously helps illuminate the mechanisms and time-course of neural processing underlying concurrent sound segregation and may lead to further development and refinement of physiologically driven models of auditory scene analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Lifespan differences in nonlinear dynamics during rest and auditory oddball performance.

    PubMed

    Müller, Viktor; Lindenberger, Ulman

    2012-07-01

    Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an indicator of cortical reactivity. During rest, both nonlinear coupling and spectral alpha power decreased with age, whereas dimensional complexity increased. In contrast, when attending to the deviant stimulus, nonlinear coupling increased with age, and complexity decreased. Correlational analyses showed that nonlinear measures assessed during auditory oddball performance were reliably related to an independently assessed measure of perceptual speed. We conclude that cortical dynamics during rest and stimulus processing undergo substantial reorganization from childhood to old age, and propose that lifespan age differences in nonlinear dynamics during stimulus processing reflect lifespan changes in the functional organization of neuronal cell assemblies. © 2012 Blackwell Publishing Ltd.
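
    Of the measures named above, spectral alpha power is the most straightforward to illustrate; the sketch below shows a standard Welch-based estimate on a synthetic channel. The nonlinear coupling and dimensional-complexity measures used in the study are not reproduced here.

    ```python
    # Sketch: resting spectral alpha power (8-12 Hz) from one EEG channel using
    # Welch's method. Synthetic signal; the study's nonlinear measures are not shown.
    import numpy as np
    from scipy.signal import welch

    fs = 250.0
    t = np.arange(0, 60, 1 / fs)
    eeg = 10 * np.sin(2 * np.pi * 10 * t) + 5 * np.random.randn(t.size)   # toy channel

    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    alpha_band = (freqs >= 8) & (freqs <= 12)
    print(f"mean alpha-band PSD: {psd[alpha_band].mean():.2f} (arbitrary units)")
    ```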

  16. Visual cortex entrains to sign language.

    PubMed

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

    Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (under ~8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
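
    The paper's metric for visual change is not specified in this abstract, so the sketch below uses a simple stand-in: the mean absolute frame-to-frame pixel difference of a (toy) video, which yields the kind of slow time series against which EEG coherence could then be computed.

    ```python
    # Sketch: a simple frame-to-frame measure of visual change in a video, used as a
    # stand-in for the paper's metric (which is not specified in the abstract).
    import numpy as np

    def visual_change(frames: np.ndarray) -> np.ndarray:
        """frames: (n_frames, height, width) grayscale video.
        Returns a (n_frames - 1,) time series of mean absolute pixel change."""
        return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

    rng = np.random.default_rng(3)
    video = rng.random((300, 48, 64))          # toy 300-frame clip
    change = visual_change(video)
    print(change.shape, round(float(change.mean()), 3))
    ```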

  17. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.

  18. Behavioral detection of intra-cortical microstimulation in the primary and secondary auditory cortex of cats

    PubMed Central

    Zhao, Zhenling; Liu, Yongchun; Ma, Lanlan; Sato, Yu; Qin, Ling

    2015-01-01

    Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After the cats were trained to perform a Go/No-Go task cued by sounds, we found that they could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure-tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal’s behavioral decision process and has implications for the development of cortical auditory prosthetics. PMID:25964744

  19. Behavioral detection of intra-cortical microstimulation in the primary and secondary auditory cortex of cats.

    PubMed

    Zhao, Zhenling; Liu, Yongchun; Ma, Lanlan; Sato, Yu; Qin, Ling

    2015-01-01

    Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After the cats were trained to perform a Go/No-Go task cued by sounds, we found that they could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure-tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal's behavioral decision process and has implications for the development of cortical auditory prosthetics.

  20. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
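
    Decoding of this kind is typically done with cross-validated pattern classification on voxel activity patterns. The sketch below shows a generic version of that recipe (a linear SVM with scikit-learn) on simulated patterns; it is not the authors' exact pipeline.

    ```python
    # Sketch: cross-validated decoding of stimulus location from voxel patterns with a
    # linear SVM (scikit-learn). Generic MVPA recipe, not the study's exact pipeline.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_trials, n_voxels = 80, 200
    labels = np.repeat([0, 1], n_trials // 2)          # e.g., left vs. right visual field
    patterns = rng.normal(0, 1, (n_trials, n_voxels))
    patterns[labels == 1, :20] += 0.5                  # weak simulated location information

    clf = SVC(kernel="linear", C=1.0)
    scores = cross_val_score(clf, patterns, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```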

  1. Long-range synchrony of gamma oscillations and auditory hallucination symptoms in schizophrenia

    PubMed Central

    Mulert, C.; Kirsch; Pascual-Marqui, Roberto; McCarley, Robert W.; Spencer, Kevin M.

    2010-01-01

    Phase locking in the gamma-band range has been shown to be diminished in patients with schizophrenia. Moreover, there have been reports of positive correlations between phase locking in the gamma-band range and positive symptoms, especially hallucinations. The aim of the present study was to use a new methodological approach in order to investigate gamma-band phase synchronization between the left and right auditory cortex in patients with schizophrenia and its relationship to auditory hallucinations. Subjects were 18 patients with chronic schizophrenia (SZ) and 16 healthy control (HC) subjects. Auditory hallucination symptom scores were obtained using the Scale for the Assessment of Positive Symptoms. Stimuli were 40-Hz binaural click trains. The generators of the 40-Hz auditory steady-state response (ASSR) were localized using eLORETA, and, based on the computed intracranial signals, lagged interhemispheric phase locking between the primary and secondary auditory cortices was analyzed. Current source density of the 40-Hz ASSR was significantly diminished in SZ in comparison to HC in the right superior and middle temporal gyrus (p<0.05). Interhemispheric phase locking was reduced in SZ in comparison to HC for the primary auditory cortices (p<0.05) but not in the secondary auditory cortices. A significant positive correlation was found between auditory hallucination symptom scores and phase synchronization between the primary auditory cortices (p<0.05, corrected for multiple testing) but not for the secondary auditory cortices. These results suggest that long-range synchrony of gamma oscillations is disturbed in schizophrenia and that this deficit is related to clinical symptoms such as auditory hallucinations. PMID:20713096
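
    For illustration, the sketch below computes a plain phase-locking value between two 40-Hz signals after band-pass filtering and a Hilbert transform. Note that the study used eLORETA-based lagged phase synchronization, a related but volume-conduction-corrected measure; the code only conveys the generic idea, on synthetic signals.

    ```python
    # Sketch: plain phase-locking value (PLV) between two 40-Hz source signals.
    # The study used eLORETA lagged phase synchronization, a related but
    # volume-conduction-corrected measure; this only shows the generic idea.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(5)
    left = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
    right = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

    b, a = butter(4, [35 / (fs / 2), 45 / (fs / 2)], btype="band")
    phase_l = np.angle(hilbert(filtfilt(b, a, left)))
    phase_r = np.angle(hilbert(filtfilt(b, a, right)))
    plv = np.abs(np.mean(np.exp(1j * (phase_l - phase_r))))
    print(f"40-Hz interhemispheric PLV: {plv:.2f}")
    ```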

  2. Effect of transcranial direct current stimulation (tDCS) on MMN-indexed auditory discrimination: a pilot study.

    PubMed

    Impey, Danielle; Knott, Verner

    2015-08-01

    Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.
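
    The MMN itself is conventionally obtained as the deviant-minus-standard difference wave and quantified over a post-stimulus window. The sketch below shows that conventional computation on invented waveforms; the analysis window and amplitudes are assumptions, not the study's parameters.

    ```python
    # Sketch: MMN as the deviant-minus-standard difference wave, quantified as the
    # mean amplitude in a 100-250 ms window (conventional; waveforms are invented).
    import numpy as np

    times = np.arange(-100, 400)                       # ms, 1-ms resolution
    rng = np.random.default_rng(6)
    erp_standard = rng.normal(0, 0.1, times.size)
    erp_deviant = rng.normal(0, 0.1, times.size)
    erp_deviant[250:320] -= 1.5                        # extra negativity ~150-220 ms

    mmn_wave = erp_deviant - erp_standard
    window = (times >= 100) & (times <= 250)
    print(f"MMN mean amplitude: {mmn_wave[window].mean():.2f} uV")
    ```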

  3. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
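
    The stimulus-reconstruction approach mentioned above broadly corresponds to fitting a backward (decoding) model that maps the multichannel neural response, over a set of time lags, back onto the speech envelope. The sketch below shows a generic ridge-regression version of that idea on simulated data; the lag range and regularization are placeholders, not the authors' estimator.

    ```python
    # Sketch: backward-model stimulus reconstruction - map multichannel responses
    # (plus time lags) back onto the speech envelope via ridge regression.
    # A generic decoding recipe, not the authors' exact estimator.
    import numpy as np

    def lagged_design(resp: np.ndarray, n_lags: int) -> np.ndarray:
        """resp: (n_time, n_channels) -> (n_time, n_channels * n_lags) design matrix."""
        n_time, n_ch = resp.shape
        X = np.zeros((n_time, n_ch * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * n_ch:(lag + 1) * n_ch] = resp[:n_time - lag]
        return X

    rng = np.random.default_rng(7)
    envelope = rng.random(2000)                                         # toy speech envelope
    meg = np.outer(envelope, rng.random(30)) + 0.5 * rng.standard_normal((2000, 30))

    X = lagged_design(meg, n_lags=10)
    lam = 1.0                                                           # ridge penalty (assumed)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
    reconstruction = X @ w
    print("reconstruction accuracy r =", round(float(np.corrcoef(reconstruction, envelope)[0, 1]), 2))
    ```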

  4. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.

    2014-01-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes. PMID:25140133

  5. Temporal tuning in the bat auditory cortex is sharper when studied with natural echolocation sequences.

    PubMed

    Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred

    2016-06-30

    Precise temporal coding is necessary for proper acoustic analysis. However, at cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bats' cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats' behavior. We show that cortical neurons tuned to target-distance actually profit from forward suppression induced by natural echolocation sequences. These neurons can more precisely extract target distance information when they are stimulated with natural echolocation sequences than during stimulation with isolated call-echo pairs. We conclude that forward suppression does for time domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. When talking about cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element.

  6. High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat

    PubMed Central

    Butler, Blake E.; Hall, Amee J.; Lomber, Stephen G.

    2015-01-01

    The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential “pitch centres” in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed to occur in either primary auditory cortex (A1) or the anterior auditory field (AAF) which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception. PMID:26225563
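
    Iterated rippled noise is usually generated by repeatedly delaying a noise token and adding it back onto itself, which builds up a pitch at 1/delay while retaining a noise-like spectrum. The sketch below implements that standard delay-and-add loop; the delay, gain, and iteration count are placeholders rather than the study's stimulus parameters.

    ```python
    # Sketch: iterated rippled noise (IRN) via the standard delay-and-add loop.
    # Delay, gain, and iteration count are placeholders, not the study's values.
    import numpy as np

    def make_irn(n_samples: int, fs: float, delay_s: float, n_iter: int,
                 gain: float = 1.0, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(n_samples)
        d = int(round(delay_s * fs))
        for _ in range(n_iter):
            delayed = np.concatenate([np.zeros(d), x[:-d]])
            x = x + gain * delayed          # each iteration strengthens the pitch percept
        return x / np.abs(x).max()

    fs = 44100
    irn = make_irn(n_samples=fs, fs=fs, delay_s=1 / 200, n_iter=16)   # ~200-Hz pitch
    print(irn.shape)
    ```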

  7. Interactions between the nucleus accumbens and auditory cortices predict music reward value.

    PubMed

    Salimpoor, Valorie N; van den Bosch, Iris; Kovacevic, Natasa; McIntosh, Anthony Randal; Dagher, Alain; Zatorre, Robert J

    2013-04-12

    We used functional magnetic resonance imaging to investigate neural processes when music gains reward value the first time it is heard. The degree of activity in the mesolimbic striatal regions, especially the nucleus accumbens, during music listening was the best predictor of the amount listeners were willing to spend on previously unheard music in an auction paradigm. Importantly, the auditory cortices, amygdala, and ventromedial prefrontal regions showed increased activity during listening conditions requiring valuation, but did not predict reward value, which was instead predicted by increasing functional connectivity of these regions with the nucleus accumbens as the reward value increased. Thus, aesthetic rewards arise from the interaction between mesolimbic reward circuitry and cortical networks involved in perceptual analysis and valuation.

  8. Cortical Memory Mechanisms and Language Origins

    ERIC Educational Resources Information Center

    Aboitiz, Francisco; Garcia, Ricardo R.; Bosman, Conrado; Brunetti, Enzo

    2006-01-01

    We have previously proposed that cortical auditory-vocal networks of the monkey brain can be partly homologized with language networks that participate in the phonological loop. In this paper, we suggest that other linguistic phenomena like semantic and syntactic processing also rely on the activation of transient memory networks, which can be…

  9. Neural bases of rhythmic entrainment in humans: critical transformation between cortical and lower-level representations of auditory rhythm.

    PubMed

    Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre

    2018-02-01

    The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
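
    Selective responses at meter-related frequencies are typically quantified by Fourier-transforming the EEG recorded during the rhythm and comparing amplitudes at meter-related versus meter-unrelated frequencies. The sketch below shows that frequency-tagging logic on a synthetic signal; the frequency sets and trial length are assumptions, not the study's values.

    ```python
    # Sketch: compare EEG spectral amplitude at meter-related vs. meter-unrelated
    # frequencies (frequency-tagging logic). Frequencies and data are assumptions.
    import numpy as np

    fs = 512.0
    t = np.arange(0, 32, 1 / fs)                        # one long rhythm trial (assumed)
    eeg = (0.8 * np.sin(2 * np.pi * 1.25 * t)           # toy meter-related response
           + 0.2 * np.sin(2 * np.pi * 1.875 * t)        # weaker meter-unrelated response
           + np.random.randn(t.size))

    amps = np.abs(np.fft.rfft(eeg)) / t.size * 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    def amp_at(f):
        return amps[np.argmin(np.abs(freqs - f))]

    meter_related = [1.25, 2.5]                          # example frequency sets, Hz (assumed)
    meter_unrelated = [1.875, 3.125]
    print("meter-related mean amplitude:  ", round(float(np.mean([amp_at(f) for f in meter_related])), 3))
    print("meter-unrelated mean amplitude:", round(float(np.mean([amp_at(f) for f in meter_unrelated])), 3))
    ```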

  10. Spectrotemporal dynamics of auditory cortical synaptic receptive field plasticity.

    PubMed

    Froemke, Robert C; Martins, Ana Raquel O

    2011-09-01

    The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Spectrotemporal Dynamics of Auditory Cortical Synaptic Receptive Field Plasticity

    PubMed Central

    Froemke, Robert C.; Martins, Ana Raquel O.

    2011-01-01

    The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. PMID:21426927

  12. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey

    PubMed Central

    Scott, Brian H.; Leccese, Paul A.; Saleem, Kadharbatcha S.; Kikuchi, Yukiko; Mullarkey, Matthew P.; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C.

    2017-01-01

    Abstract In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. PMID:26620266

  13. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  14. Population responses in primary auditory cortex simultaneously represent the temporal envelope and periodicity features in natural speech.

    PubMed

    Abrams, Daniel A; Nicol, Trent; White-Schwoch, Travis; Zecker, Steven; Kraus, Nina

    2017-05-01

    Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in the primary auditory cortex of the anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversational, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
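
    As a rough illustration of the two analyses described above (envelope tracking and fundamental-frequency phase locking), the following Python sketch runs on simulated signals with NumPy/SciPy; the sampling rate, band edges, and placeholder data are assumptions, not the authors' pipeline.

      # Sketch: extract the low-frequency envelope and an F0 (periodicity) measure
      # from a cortical population response and compare them to the speech signal.
      # All signals here are toy data; only the general approach is illustrated.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs = 1000.0                                  # assumed sampling rate (Hz)
      rng = np.random.default_rng(0)
      speech = rng.standard_normal(int(4 * fs))    # placeholder sentence waveform (4 s)
      neural = rng.standard_normal(int(4 * fs))    # placeholder population response

      def band(x, lo, hi, fs, order=4):
          sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
          return sosfiltfilt(sos, x)

      # 1) Envelope tracking: correlate the low-frequency (<10 Hz) envelopes.
      speech_env = np.abs(hilbert(band(speech, 1.0, 10.0, fs)))
      neural_env = np.abs(hilbert(band(neural, 1.0, 10.0, fs)))
      envelope_tracking = np.corrcoef(speech_env, neural_env)[0, 1]

      # 2) Periodicity coding: phase locking between response and speech in an
      #    assumed F0 band (90-130 Hz for this hypothetical talker).
      phi_neural = np.angle(hilbert(band(neural, 90.0, 130.0, fs)))
      phi_speech = np.angle(hilbert(band(speech, 90.0, 130.0, fs)))
      f0_phase_locking = np.abs(np.mean(np.exp(1j * (phi_neural - phi_speech))))

      print(f"envelope tracking r = {envelope_tracking:.2f}, F0 PLV = {f0_phase_locking:.2f}")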

  15. Temporal lobe networks supporting the comprehension of spoken words.

    PubMed

    Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius

    2017-09-01

    Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Auditory and visual connectivity gradients in frontoparietal cortex

    PubMed Central

    Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    Abstract A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top–down modulation of modality‐specific information to occur within higher‐order cortex. This could provide a potentially faster and more efficient pathway by which top–down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long‐range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304

  17. Spatio-temporal source cluster analysis reveals fronto-temporal auditory change processing differences within a shared autistic and schizotypal trait phenotype.

    PubMed

    Ford, Talitha C; Woods, Will; Crewther, David P

    2017-01-01

    Social Disorganisation (SD) is a shared autistic and schizotypal phenotype that is present in the subclinical population. Auditory processing deficits, particularly in mismatch negativity/field (MMN/F), have been reported across both spectrum disorders. This study investigates differences in MMN/F cortical spatio-temporal source activity between higher and lower quintiles of the SD spectrum. Sixteen low (9 female) and 19 high (9 female) SD subclinical adults (18-40 years) underwent magnetoencephalography (MEG) during an MMF paradigm in which standard tones (50 ms) were interrupted by infrequent duration deviants (100 ms). Spatio-temporal source cluster analysis with permutation testing revealed no difference between the groups in source activation to the standard tone. To the deviant tone, however, there was significantly reduced right hemisphere fronto-temporal and insular cortex activation for the high SD group (p = 0.038). The MMF, as a product of the cortical response to the deviant minus that to the standard, did not differ significantly between the high and low Social Disorganisation groups. These data demonstrate a deficit in right fronto-temporal processing of an auditory change for those with more of the shared SD phenotype, indicating that right fronto-temporal auditory processing may be associated with psychosocial functioning.
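
    For readers unfamiliar with duration-deviant designs, the sketch below builds an oddball tone sequence of the kind described above. Only the 50 ms standard and 100 ms deviant durations come from the study; the tone frequency, deviant probability, and stimulus onset asynchrony are illustrative assumptions.

      # Sketch: build a duration-deviant oddball (MMN/MMF) tone sequence.
      import numpy as np

      fs = 44100                 # audio sampling rate (Hz)
      freq = 1000.0              # tone frequency (assumed)
      p_deviant = 0.15           # deviant probability (assumed)
      soa = 0.5                  # onset-to-onset interval in seconds (assumed)
      n_trials = 400
      rng = np.random.default_rng(1)

      def tone(duration_s, ramp_s=0.005):
          t = np.arange(int(duration_s * fs)) / fs
          y = np.sin(2 * np.pi * freq * t)
          ramp = np.linspace(0.0, 1.0, int(ramp_s * fs))
          y[:ramp.size] *= ramp                # onset ramp to avoid clicks
          y[-ramp.size:] *= ramp[::-1]         # offset ramp
          return y

      is_deviant = rng.random(n_trials) < p_deviant
      sequence = np.zeros(int(n_trials * soa * fs))
      for i, dev in enumerate(is_deviant):
          y = tone(0.100 if dev else 0.050)    # 100 ms deviant vs 50 ms standard
          start = int(i * soa * fs)
          sequence[start:start + y.size] = y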

  18. Cortical response variability as a developmental index of selective auditory attention

    PubMed Central

    Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina

    2014-01-01

    Attention induces synchronicity in neuronal firing for the encoding of a given stimulus at the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory attention. We compared cortical auditory-evoked variability with attention across three age groups: preschoolers, school-aged children and young adults. Results reveal an increased impact of selective auditory attention on cortical response variability with development. Although all three age groups have equivalent response variability to attended speech, only school-aged children and adults have a distinction between attend and ignore conditions. Preschoolers, on the other hand, demonstrate no impact of attention on cortical responses, which we argue reflects the gradual emergence of attention within this age range. Outcomes are interpreted in the context of the behavioral relevance of cortical response variability and its potential to serve as a developmental index of cognitive skill. PMID:24267508
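
    One common way to operationalize cortical response variability is one minus the mean pairwise inter-trial correlation of single-trial evoked responses; the sketch below applies that metric to simulated attend/ignore epochs. The metric choice, array shapes, and noise levels are assumptions, not necessarily the measure used in the study above.

      # Sketch: trial-to-trial variability of cortical evoked responses, computed
      # as 1 minus the mean pairwise inter-trial correlation (toy data).
      import numpy as np

      rng = np.random.default_rng(2)
      template = np.sin(2 * np.pi * 4 * np.linspace(0, 0.5, 500))     # toy evoked shape
      epochs_attend = template + 0.5 * rng.standard_normal((60, 500))  # less noisy trials
      epochs_ignore = template + 1.5 * rng.standard_normal((60, 500))  # noisier trials

      def response_variability(epochs):
          r = np.corrcoef(epochs)                 # trials x trials correlation matrix
          upper = r[np.triu_indices_from(r, k=1)]
          return 1.0 - upper.mean()               # higher value = more variable responses

      print(response_variability(epochs_attend), response_variability(epochs_ignore))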

  19. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    PubMed

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlation was localized to cortical regions associated with the default mode network (DMN), and positive correlation to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.
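
    The sketch below shows one plausible way to derive a theta-band (4-8 Hz) amplitude regressor from a single EEG channel for later correlation with BOLD; the sampling rate, toy data, block averaging to the TR, and omission of HRF convolution are all simplifying assumptions rather than the authors' EEG-fMRI pipeline.

      # Sketch: theta-band amplitude envelope from one EEG channel, downsampled
      # to the fMRI volume rate so it could serve as a BOLD regressor.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs = 250.0
      rng = np.random.default_rng(3)
      eeg = rng.standard_normal(int(300 * fs))          # 5 min of toy EEG

      sos = butter(4, [4.0, 8.0], btype="band", fs=fs, output="sos")
      theta_amplitude = np.abs(hilbert(sosfiltfilt(sos, eeg)))

      # Block-average the envelope to the fMRI TR (assumed 2 s here).
      tr = 2.0
      samples_per_tr = int(tr * fs)
      n_vols = theta_amplitude.size // samples_per_tr
      theta_regressor = theta_amplitude[: n_vols * samples_per_tr].reshape(n_vols, -1).mean(axis=1)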

  20. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  1. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    PubMed

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  2. Functional abnormalities in the cortical processing of sound complexity and musical consonance in schizophrenia: evidence from an evoked potential study

    PubMed Central

    2013-01-01

    Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126

  3. Mismatch Negativity in Recent-Onset and Chronic Schizophrenia: A Current Source Density Analysis

    PubMed Central

    Fulham, W. Ross; Michie, Patricia T.; Ward, Philip B.; Rasser, Paul E.; Todd, Juanita; Johnston, Patrick J.; Thompson, Paul M.; Schall, Ulrich

    2014-01-01

    Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups, involved reduced hemispheric asymmetry, and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically-constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal response and no lateralisation in frontal ROIs. For late MMN, patients had reduced bilateral parietal response and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However amplitude reductions were observed in both primary and secondary auditory cortex. These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia. PMID:24949859
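
    As a minimal illustration of the duration-deviant MMN measure discussed above, the sketch below computes the deviant-minus-standard difference wave and extracts its peak in a 100-250 ms window; the epoch counts, sampling rate, and window edges are assumptions rather than this study's parameters.

      # Sketch: MMN as the deviant-minus-standard difference wave, with peak
      # amplitude and latency extracted from a typical post-stimulus window.
      import numpy as np

      fs = 500.0
      times = np.arange(-0.1, 0.4, 1.0 / fs)            # epochs from -100 to 400 ms
      rng = np.random.default_rng(4)
      standard_epochs = rng.standard_normal((200, times.size))   # toy single trials
      deviant_epochs = rng.standard_normal((40, times.size))

      mmn = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

      window = (times >= 0.100) & (times <= 0.250)
      peak_idx = np.argmin(mmn[window])                 # MMN is a negativity
      peak_amplitude = mmn[window][peak_idx]
      peak_latency_ms = times[window][peak_idx] * 1000.0
      print(peak_amplitude, peak_latency_ms)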

  4. Effects of Background Noise on Cortical Encoding of Speech in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Russo, Nicole; Zecker, Steven; Trommer, Barbara; Chen, Julia; Kraus, Nina

    2009-01-01

    This study provides new evidence of deficient auditory cortical processing of speech in noise in autism spectrum disorders (ASD). Speech-evoked responses (approximately 100-300 ms) in quiet and background noise were evaluated in typically-developing (TD) children and children with ASD. ASD responses showed delayed timing (both conditions) and…

  5. Representation of Sound Categories in Auditory Cortical Maps

    ERIC Educational Resources Information Center

    Guenther, Frank H.; Nieto-Castanon, Alfonso; Ghosh, Satrajit S.; Tourville, Jason A.

    2004-01-01

    Functional magnetic resonance imaging (fMRI) was used to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical (good) and nonprototypical (bad) examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation…

  6. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
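
    The multivoxel pattern analysis referred to above can be illustrated with a cross-validated linear classifier applied to simulated region-of-interest patterns, as in the sketch below; the classifier, trial counts, and effect size are assumptions, and this is not the authors' pipeline.

      # Sketch: cross-validated decoding of attended direction (left vs right)
      # from a simulated voxel pattern, in the spirit of MVPA.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n_trials, n_voxels = 80, 200
      labels = np.repeat([0, 1], n_trials // 2)          # 0 = attend left, 1 = attend right
      patterns = rng.standard_normal((n_trials, n_voxels))
      patterns[labels == 1, :20] += 0.3                  # weak information in a few voxels

      clf = LinearSVC(C=1.0, max_iter=5000)
      accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
      print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")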

  7. Preservation of Auditory P300-Like Potentials in Cortical Deafness

    PubMed Central

    Cavinato, Marianna; Rigon, Jessica; Volpato, Chiara; Semenza, Carlo; Piccione, Francesco

    2012-01-01

    The phenomenon of blindsight has been largely studied and refers to residual abilities of blind patients without an acknowledged visual awareness. Similarly, "deaf hearing" might represent a further example of dissociation between detection and perception of sounds. Here we report the rare case of a patient with persistent and complete cortical deafness caused by damage to the bilateral temporo-parietal lobes, who occasionally showed unexpected reactions to environmental sounds even though she denied hearing them. We applied, for the first time, electrophysiological techniques to better understand the patient's auditory processing and perceptual awareness. While auditory brainstem responses were within normal limits, no middle- and long-latency waveforms could be identified. However, event-related potentials showed conflicting results. While the Mismatch Negativity could not be evoked, robust P3-like waveforms were surprisingly found in the latency range of 600–700 ms. The generation of P3-like potentials, despite extensive destruction of the auditory cortex, might imply the integrity of independent circuits necessary to process auditory stimuli even in the absence of consciousness of sound. Our results support the reverse hierarchy theory that asserts that the higher levels of the hierarchy are immediately available for perception, while low-level information requires more specific conditions. The accurate characterization in terms of anatomy and neurophysiology of the auditory lesions might facilitate understanding of the neural substrates involved in deaf hearing. PMID:22272260

  8. Cortico-Cortical Connectivity Within Ferret Auditory Cortex.

    PubMed

    Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J

    2015-10-15

    Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.

  9. Stereotactically-guided Ablation of the Rat Auditory Cortex, and Localization of the Lesion in the Brain.

    PubMed

    Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A

    2017-10-11

    The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.

  10. [In Process Citation]

    PubMed

    Ackermann; Mathiak

    1999-11-01

    Pure word deafness (auditory verbal agnosia) is characterized by an impairment of auditory comprehension, repetition of verbal material and writing to dictation, whereas spontaneous speech production and reading largely remain unaffected. Sometimes, this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Often, lip reading and/or a slowed speaking rate can, within limits, compensate for the speech comprehension deficits. Apart from a few exceptions, the available reports of pure word deafness documented a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues and/or spatial localization of acoustic events were compromised as well. The observed variable constellation of auditory signs and symptoms in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each of which, as documented in a variety of non-human species, subserves the encoding of specific stimulus parameters. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or an impairment of the suggested "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere. Only a few instances of a rather isolated disruption of the discrimination/identification of nonverbal sound sources, in the presence of uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.

  11. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey.

    PubMed

    Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. Cholecystokinin from the entorhinal cortex enables neural plasticity in the auditory cortex

    PubMed Central

    Li, Xiao; Yu, Kai; Zhang, Zicong; Sun, Wenjian; Yang, Zhou; Feng, Jingyu; Chen, Xi; Liu, Chun-Hua; Wang, Haitao; Guo, Yi Ping; He, Jufang

    2014-01-01

    Patients with damage to the medial temporal lobe show deficits in forming new declarative memories but can still recall older memories, suggesting that the medial temporal lobe is necessary for encoding memories in the neocortex. Here, we found that cortical projection neurons in the perirhinal and entorhinal cortices were mostly immunopositive for cholecystokinin (CCK). Local infusion of CCK in the auditory cortex of anesthetized rats induced plastic changes that enabled cortical neurons to potentiate their responses or to start responding to an auditory stimulus that was paired with a tone that robustly triggered action potentials. CCK infusion also enabled auditory neurons to start responding to a light stimulus that was paired with a noise burst. In vivo intracellular recordings in the auditory cortex showed that synaptic strength was potentiated after two pairings of presynaptic and postsynaptic activity in the presence of CCK. Infusion of a CCKB antagonist in the auditory cortex prevented the formation of a visuo-auditory association in awake rats. Finally, activation of the entorhinal cortex potentiated neuronal responses in the auditory cortex, which was suppressed by infusion of a CCKB antagonist. Together, these findings suggest that the medial temporal lobe influences neocortical plasticity via CCK-positive cortical projection neurons in the entorhinal cortex. PMID:24343575

  13. Functional mapping of the primate auditory system.

    PubMed

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  14. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps

    PubMed Central

    2016-01-01

    Abstract Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor‐preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface‐based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory‐motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory‐motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M‐I. Hum Brain Mapp 37:2784–2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27061771

  15. Developmental timeframes for induction of microgyria and rapid auditory processing deficits in the rat.

    PubMed

    Threlkeld, Steven W; McClure, Melissa M; Rosen, Glenn D; Fitch, R Holly

    2006-09-13

    Induction of a focal freeze lesion to the skullcap of a 1-day-old rat pup leads to the formation of microgyria similar to those identified postmortem in human dyslexics. Rats with microgyria exhibit rapid auditory processing deficits similar to those seen in language-impaired (LI) children and in infants at risk for LI, and these effects are particularly marked in juvenile as compared to adult subjects. In the current study, a startle response paradigm was used to investigate gap detection in juvenile and adult rats that received bilateral freezing lesions or sham surgery on postnatal day (P) 1, 3 or 5. Microgyria were confirmed in P1 and P3 lesion rats, but not in the P5 lesion group. We found a significant reduction in brain weight and neocortical volume in P1 and P3 lesioned brains relative to shams. Juvenile (P27-39) behavioral data indicated significant rapid auditory processing deficits in all three lesion groups as compared to sham subjects, while adult (P60+) data revealed a persistent disparity only between P1-lesioned rats and shams. Combined results suggest that generalized pathology affecting neocortical development, rather than factors specific to the formation of microgyria per se, is responsible for the presence of rapid auditory processing deficits. Finally, results show that the window for the induction of rapid auditory processing deficits through disruption of neurodevelopment appears to extend beyond the endpoint for cortical neuronal migration, although the persistent deficits exhibited by P1 lesion subjects suggest a secondary neurodevelopmental window, at the time of cortical neuromigration, representing a peak period of vulnerability.
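
    The sketch below generates a gap-in-noise trial of the general kind used in startle-based gap-detection testing; the gap duration, lead time, stimulus levels, and trial length are illustrative assumptions, not the parameters of this study.

      # Sketch: gap-in-noise prepulse trial; continuous background noise with a
      # brief silent gap shortly before a louder startle burst (all values assumed).
      import numpy as np

      fs = 44100
      rng = np.random.default_rng(10)

      def gap_trial(gap_ms=10, lead_ms=100, total_s=3.0, startle_ms=50):
          n_total = int(total_s * fs)
          n_startle = int(startle_ms * fs / 1000)
          n_gap = int(gap_ms * fs / 1000)
          n_lead = int(lead_ms * fs / 1000)
          noise = 0.1 * rng.standard_normal(n_total)                 # background noise
          startle_onset = n_total - n_startle
          gap_onset = startle_onset - n_lead - n_gap
          noise[gap_onset:gap_onset + n_gap] = 0.0                   # silent gap (the cue)
          noise[startle_onset:] = rng.standard_normal(n_startle)     # loud startle burst
          return noise

      trial = gap_trial(gap_ms=10)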

  16. Can rhythmic auditory cuing remediate language-related deficits in Parkinson's disease?

    PubMed

    Kotz, Sonja A; Gunter, Thomas C

    2015-03-01

    Neurodegenerative changes of the basal ganglia in idiopathic Parkinson's disease (IPD) lead to motor deficits as well as general cognitive decline. Given these impairments, the question arises as to whether motor and nonmotor deficits can be ameliorated similarly. We reason that a domain-general sensorimotor circuit involved in temporal processing may support the remediation of such deficits. Following findings that auditory cuing benefits gait kinematics, we explored whether reported language-processing deficits in IPD can also be remediated via auditory cuing. During continuous EEG measurement, an individual diagnosed with IPD heard one of two temporally predictable but metrically different beat-based auditory cues: a march, which metrically aligned with the speech accent structure, or a waltz, which did not; in a third condition, no cue was presented. Each cue (or silence) preceded naturally spoken sentences that were either grammatically well formed or semantically or syntactically incorrect. Results confirmed that only cuing with a march led to improved computation of syntactic and semantic information. We infer that a marching rhythm may lead to a stronger engagement of the cerebello-thalamo-cortical circuit that compensates for dysfunctional striato-cortical timing. Reinforcing temporal realignment, in turn, may lead to the timely processing of linguistic information embedded in the temporally variable speech signal. © 2014 New York Academy of Sciences.

  17. Mouse auditory cortex differs from visual and somatosensory cortices in the laminar distribution of cytochrome oxidase and acetylcholinesterase.

    PubMed

    Anderson, L A; Christianson, G B; Linden, J F

    2009-02-03

    Cytochrome oxidase (CYO) and acetylcholinesterase (AChE) staining density varies across the cortical layers in many sensory areas. The laminar variations likely reflect differences between the layers in levels of metabolic activity and cholinergic modulation. The question of whether these laminar variations differ between primary sensory cortices has never been systematically addressed in the same set of animals, since most studies of sensory cortex focus on a single sensory modality. Here, we compared the laminar distribution of CYO and AChE activity in the primary auditory, visual, and somatosensory cortices of the mouse, using Nissl-stained sections to define laminar boundaries. Interestingly, for both CYO and AChE, laminar patterns of enzyme activity were similar in the visual and somatosensory cortices, but differed in the auditory cortex. In the visual and somatosensory areas, staining densities for both enzymes were highest in layers III/IV or IV and in lower layer V. In the auditory cortex, CYO activity showed a reliable peak only at the layer III/IV border, while AChE distribution was relatively homogeneous across layers. These results suggest that laminar patterns of metabolic activity and cholinergic influence are similar in the mouse visual and somatosensory cortices, but differ in the auditory cortex.

  18. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
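
    Cross-frequency coupling of the sort described above (low-frequency phase modulating gamma amplitude) is often summarized with a mean-vector-length modulation index; the sketch below computes one on simulated data, with band edges and signal properties as assumptions rather than the authors' MEG pipeline.

      # Sketch: theta-phase / gamma-amplitude coupling via a mean-vector-length
      # modulation index on a simulated auditory-cortex signal.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs = 1000.0
      rng = np.random.default_rng(6)
      meg = rng.standard_normal(int(60 * fs))             # 60 s of toy data

      def band(x, lo, hi):
          sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
          return sosfiltfilt(sos, x)

      theta_phase = np.angle(hilbert(band(meg, 4.0, 8.0)))
      gamma_amp = np.abs(hilbert(band(meg, 30.0, 80.0)))

      # Modulation index: length of the mean amplitude-weighted phase vector.
      mi = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / gamma_amp.mean()
      print(f"theta-gamma modulation index: {mi:.3f}")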

  19. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  20. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    PubMed

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise. Copyright © 2017 the American Physiological Society.
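
    A toy version of spike-train decoding across signal-to-noise ratios is sketched below, using a leave-one-out nearest-centroid rule on simulated spike counts; the decoder, unit counts, and noise levels are assumptions standing in for the decoding techniques named above.

      # Sketch: decode FM-sweep identity from simulated population spike counts at
      # two notional SNR levels with a leave-one-out nearest-centroid classifier.
      import numpy as np

      rng = np.random.default_rng(7)
      n_stimuli, n_repeats, n_units = 8, 20, 30

      def decode_accuracy(rate_gain, noise_sd):
          tuning = rng.standard_normal((n_stimuli, n_units))            # mean responses
          responses = rate_gain * tuning[:, None, :] + noise_sd * rng.standard_normal(
              (n_stimuli, n_repeats, n_units))
          correct = 0
          for s in range(n_stimuli):
              for r in range(n_repeats):
                  centroids = []
                  for c in range(n_stimuli):
                      trials = responses[c]
                      if c == s:
                          trials = np.delete(trials, r, axis=0)         # leave one out
                      centroids.append(trials.mean(axis=0))
                  d = np.linalg.norm(np.array(centroids) - responses[s, r], axis=1)
                  correct += int(np.argmin(d) == s)
          return correct / (n_stimuli * n_repeats)

      for snr_label, gain, sd in [("high SNR", 1.0, 0.5), ("low SNR", 0.5, 1.0)]:
          print(snr_label, decode_accuracy(gain, sd))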

  1. Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.

    PubMed

    Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.
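
    Two small computations help make the paradigm concrete: the frequency ratio implied by a -100 cent feedback shift, and an event-related band power (ERBP) estimate in the 70-150 Hz high-gamma band relative to a pre-perturbation baseline. The sketch below uses simulated ECoG and assumed trial timing; it is not the authors' analysis code.

      # Sketch: (1) frequency ratio for a -100 cent pitch shift; (2) high-gamma
      # ERBP relative to baseline for one simulated 2 s trial (perturbation at 1 s).
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      shift_ratio = 2.0 ** (-100 / 1200.0)      # -100 cents is about 0.944 x original F0
      print(f"a 200 Hz voice would be fed back at {200 * shift_ratio:.1f} Hz")

      fs = 2000.0
      rng = np.random.default_rng(8)
      ecog = rng.standard_normal(int(2 * fs))   # toy single-trial ECoG

      sos = butter(4, [70.0, 150.0], btype="band", fs=fs, output="sos")
      hg_power = np.abs(hilbert(sosfiltfilt(sos, ecog))) ** 2

      baseline = hg_power[int(0.5 * fs):int(1.0 * fs)].mean()   # 0.5 s before perturbation
      response = hg_power[int(1.0 * fs):int(1.5 * fs)].mean()   # 0.5 s after perturbation
      erbp_db = 10.0 * np.log10(response / baseline)
      print(f"high-gamma ERBP: {erbp_db:.2f} dB re baseline")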

  2. Sensory-Motor Interactions for Vocal Pitch Monitoring in Non-Primary Human Auditory Cortex

    PubMed Central

    Larson, Charles R.; Jackson, Adam W.; Chen, Fangxiang; Hansen, Daniel R.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (−100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70–150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control. PMID:23577157

  3. Deficits in auditory processing contribute to impairments in vocal affect recognition in autism spectrum disorders: A MEG study.

    PubMed

    Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David

    2015-11-01

    The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. © 2015 APA, all rights reserved.
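
    Sensory gating in a paired-tone paradigm is conventionally summarized as the ratio of the response to the second tone over the response to the first (S2/S1); the sketch below computes such a ratio from a simulated averaged response, with the peak windows and the 500 ms inter-tone interval as assumptions.

      # Sketch: S2/S1 sensory-gating ratio from a paired-tone averaged response.
      # Assumes S1 at t = 0 s and S2 at t = 0.5 s (hypothetical timing, toy data).
      import numpy as np

      fs = 600.0
      times = np.arange(-0.1, 1.0, 1.0 / fs)            # epoch spanning both tones
      rng = np.random.default_rng(9)
      evoked = rng.standard_normal(times.size)          # averaged pair response (toy)

      def peak_amplitude(t_start, t_stop):
          window = (times >= t_start) & (times <= t_stop)
          return np.abs(evoked[window]).max()

      s1 = peak_amplitude(0.08, 0.15)                   # M100-like peak to the first tone
      s2 = peak_amplitude(0.58, 0.65)                   # same window after the second tone
      gating_ratio = s2 / s1                            # lower ratio = stronger gating
      print(f"S2/S1 gating ratio: {gating_ratio:.2f}")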

  4. Cortical Auditory Evoked Potentials to Evaluate Cochlear Implant Candidacy in an Ear With Long-standing Hearing Loss: A Case Report.

    PubMed

    Patel, Tirth R; Shahin, Antoine J; Bhat, Jyoti; Welling, D Bradley; Moberly, Aaron C

    2016-10-01

    We describe a novel use of cortical auditory evoked potentials in the preoperative workup to determine ear candidacy for cochlear implantation. A 71-year-old male was evaluated who had a long-deafened right ear, had never worn a hearing aid in that ear, and relied heavily on use of a left-sided hearing aid. Electroencephalographic testing was performed using free field auditory stimulation of each ear independently with pure tones at 1000 and 2000 Hz at approximately 10 dB above pure-tone thresholds for each frequency and for each ear. Mature cortical potentials were identified through auditory stimulation of the long-deafened ear. The patient underwent successful implantation of that ear. He experienced progressively improving aided pure-tone thresholds and binaural speech recognition benefit (AzBio score of 74%). Findings suggest that use of cortical auditory evoked potentials may serve a preoperative role in ear selection prior to cochlear implantation. © The Author(s) 2016.

  5. Cortical processing of speech in individuals with auditory neuropathy spectrum disorder.

    PubMed

    Apeksha, Kumari; Kumar, U Ajith

    2018-06-01

    Auditory neuropathy spectrum disorder (ANSD) is a condition in which cochlear amplification function (involving outer hair cells) is normal but neural conduction in the auditory pathway is disordered. This study was done to investigate the cortical representation of speech in individuals with ANSD and to compare it with that of individuals with normal hearing. Forty-five participants, including 21 individuals with ANSD and 24 individuals with normal hearing, were considered for the study. Individuals with ANSD had hearing thresholds ranging from normal hearing to moderate hearing loss. Auditory cortical evoked potentials were recorded in an oddball paradigm with /ba/ and /da/ stimuli, using 64 electrodes placed on the scalp. Onset cortical responses were also recorded in a repetitive paradigm using /da/ stimuli. Sensitivity and reaction time required to identify the oddball stimuli were also obtained. Behavioural results indicated that individuals in the ANSD group had significantly lower sensitivity and longer reaction times compared to individuals with normal hearing sensitivity. Reliable P300 responses could be elicited in both groups. However, a significant difference in scalp topographies was observed between the two groups in both the repetitive and oddball paradigms. Source localization using local autoregressive analysis revealed that activations were more diffuse in individuals with ANSD than in individuals with normal hearing sensitivity. Results indicated that the brain networks and regions activated in individuals with ANSD during detection and discrimination of speech sounds differ from those of normal-hearing individuals. In general, normal-hearing individuals showed more focal activations, whereas activations in individuals with ANSD were more diffuse.

  6. Cortical Auditory Evoked Potentials with Simple (Tone Burst) and Complex (Speech) Stimuli in Children with Cochlear Implant

    PubMed Central

    Martins, Kelly Vasconcelos Chaves; Gil, Daniela

    2017-01-01

Introduction  The recording of the P1 component of the cortical auditory evoked potential has been widely used to analyze the behavior of auditory pathways in response to cochlear implant stimulation. Objective  To determine the influence of aural rehabilitation on the latency and amplitude of the P1 cortical auditory evoked potential component elicited by simple auditory stimuli (tone burst) and complex stimuli (speech) in children with cochlear implants. Method  The study included six individuals of both genders, aged 5 to 10 years old, who had been cochlear implant users for at least 12 months and who attended auditory rehabilitation with an aural rehabilitation therapy approach. Participants underwent cortical auditory evoked potential testing at the beginning of the study and after 3 months of aural rehabilitation. To elicit the responses, simple stimuli (tone burst) and complex stimuli (speech) were used and presented in free field at 70 dB HL. The results were statistically analyzed, and the two evaluations were compared. Results  There was no significant difference between the types of eliciting stimulus of the cortical auditory evoked potential for the latency and the amplitude of P1. There was a statistically significant difference in P1 latency between the evaluations for both stimuli, with a reduction in latency in the second evaluation after 3 months of auditory rehabilitation. There was no statistically significant difference regarding the amplitude of P1 under the two types of stimuli or in the two evaluations. Conclusion  A decrease in latency of the P1 component elicited by both simple and complex stimuli was observed within a three-month interval in children with cochlear implants undergoing aural rehabilitation. PMID:29018498

  7. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  8. Cortical processes of speech illusions in the general population.

    PubMed

    Schepers, E; Bodar, L; van Os, J; Lousberg, R

    2016-10-18

There is evidence that experimentally elicited auditory illusions in the general population index risk for psychotic symptoms. As little is known about the underlying cortical mechanisms of auditory illusions, an experiment was conducted to analyze the processing of auditory illusions in a general population sample. In a follow-up design with two measurement moments (baseline and 6 months), participants (n = 83) underwent the White Noise task during simultaneous recording with a 14-lead EEG. An auditory illusion was defined as hearing any speech in a sound fragment containing white noise. A total of 256 speech illusions (SI) were observed over the two measurements, with a high degree of stability of SI over time. There were 7 main effects of speech illusion on the EEG alpha band, the most significant indicating a decrease in activity at T3 (t = -4.05). Other EEG frequency bands (slow beta, fast beta, gamma, delta, theta) showed no significant associations with SI. SIs are thus characterized by reduced alpha activity in non-clinical populations. Given the association of SIs with psychosis, follow-up research is required to examine the possibility that reduced alpha activity mediates SIs in high-risk and symptomatic populations.
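
    The alpha-band finding rests on comparing band-limited EEG power between illusion and no-illusion events. A minimal sketch of that comparison on simulated single-channel epochs follows; the sampling rate, Welch windowing, and paired t-test are illustrative assumptions, not the study's exact statistical model.

```python
# Sketch: alpha-band (8-12 Hz) power at one electrode, compared between
# illusion and no-illusion epochs with a paired t-test. Epochs are simulated
# white noise; the study's preprocessing and statistics were more involved.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

fs = 250                                        # Hz, assumed sampling rate
rng = np.random.default_rng(1)

def alpha_power(epoch, fs):
    f, pxx = welch(epoch, fs=fs, nperseg=fs * 2)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

# 30 paired observations (e.g., per-subject averages), 4-s epochs
illusion = [alpha_power(rng.normal(size=fs * 4), fs) for _ in range(30)]
no_illusion = [alpha_power(1.1 * rng.normal(size=fs * 4), fs) for _ in range(30)]

t, p = ttest_rel(illusion, no_illusion)
print(f"alpha power, illusion vs. no illusion: t = {t:.2f}, p = {p:.3f}")
```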

  9. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
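
    As a rough illustration of the GAM approach described in this abstract, the sketch below fits a Poisson GAM with smooth main effects and a tensor-product interaction term to simulated spike counts over two stimulus dimensions. The pygam package and the dimension names are assumptions; the paper's own implementation is not specified here.

```python
# Sketch: a Poisson GAM relating spike counts to two stimulus dimensions and
# their interaction (tensor term te). Data are simulated; pygam is one
# convenient implementation and is an assumption, not the paper's code.
import numpy as np
from pygam import PoissonGAM, s, te

rng = np.random.default_rng(2)
n_stim = 500
freq = rng.uniform(4, 64, n_stim)              # hypothetical dimension 1 (kHz)
bandwidth = rng.uniform(0.1, 2.0, n_stim)      # hypothetical dimension 2 (octaves)

# Ground-truth firing rate contains an interaction between the two dimensions
rate = np.exp(0.5 + 0.02 * freq - 0.3 * bandwidth + 0.01 * freq * bandwidth)
spikes = rng.poisson(rate)

X = np.column_stack([freq, bandwidth])
gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, spikes)   # mains + interaction
gam.summary()   # significance of te(0, 1) indexes cross-dimension interaction
```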

  10. Human cortical organization for processing vocalizations indicates representation of harmonic structure as a signal attribute

    PubMed Central

    Lewis, James W.; Talkington, William J.; Walker, Nathan A.; Spirou, George A.; Jajosky, Audrey; Frum, Chris

    2009-01-01

The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using fMRI, we identified HNR-sensitive regions when presenting artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human non-verbal vocalizations or speech sounds. These results demonstrate that the HNR of a sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for putative spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound. PMID:19228981
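
    A global harmonics-to-noise ratio can be estimated in several ways; the sketch below uses the normalized autocorrelation peak, in the spirit of Praat's harmonicity measure, on simulated signals. The exact HNR computation used in the study may differ.

```python
# Sketch: a global harmonics-to-noise ratio (HNR) from the normalized
# autocorrelation peak. Signals are simulated; this is one common estimator,
# not necessarily the one used in the paper.
import numpy as np

def hnr_db(signal, fs, f0_min=50.0, f0_max=500.0):
    """Estimate HNR (dB) from the autocorrelation peak in the plausible F0 range."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                              # normalize so lag 0 equals 1
    lo, hi = int(fs / f0_max), int(fs / f0_min)  # lag range for F0 candidates
    r = np.clip(ac[lo:hi].max(), 1e-6, 1 - 1e-6)
    return 10.0 * np.log10(r / (1.0 - r))

fs = 16000
n = fs // 2                                      # 0.5 s of signal
t = np.arange(n) / fs
harmonic = sum(np.sin(2 * np.pi * 150 * k * t) for k in range(1, 6))
noisy = harmonic + 2.0 * np.random.default_rng(3).normal(size=n)
print(f"clean: {hnr_db(harmonic, fs):.1f} dB, noisy: {hnr_db(noisy, fs):.1f} dB")
```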

  11. Knowledge About Sounds—Context-Specific Meaning Differently Activates Cortical Hemispheres, Auditory Cortical Fields, and Layers in House Mice

    PubMed Central

    Geissler, Diana B.; Schmidt, H. Sabine; Ehret, Günter

    2016-01-01

Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition while naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who differ in their knowledge of, and motivation to respond to, acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects, in AI, AAF, and UF, the integration of sound properties with animal state-dependent factors; in the higher-order field AII, the news value of a given sound in the behavioral context; and in the higher-order field DP, the level of maternal motivation and, via a left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific, preparing different outputs of AC fields in the process of perceiving, recognizing, and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest a differentiation between brains hormonally primed to know (mothers) and brains that acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition. PMID:27013959

  12. Cortical oscillations and entrainment in speech processing during working memory load.

    PubMed

    Hjortkjaer, Jens; Märcher-Rørsted, Jonatan; Fuglsang, Søren A; Dau, Torsten

    2018-02-02

    Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels of background noise. Increasing WM load at higher n-back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and increased additionally with higher n-back level. The observed alpha-theta power changes are consistent with visual n-back paradigms suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low-frequency cortical activity (< 13 Hz). We found that increases in both types of WM load (background noise and n-back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top-down influence of WM processing on cortical speech entrainment. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
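
    The entrainment measure described above, a linear mapping between the speech envelope and low-frequency cortical activity, can be illustrated with a lagged ridge regression (a simple temporal-response-function approach). The sketch below uses simulated data; the sampling rate, lag range, and regularization are assumptions rather than the study's pipeline.

```python
# Sketch: speech-envelope "entrainment" as a lagged linear (ridge) mapping from
# the envelope to low-pass (<13 Hz) EEG. Data are simulated.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import Ridge

fs = 64                                    # Hz, assumed analysis rate
rng = np.random.default_rng(4)
n = fs * 120                               # two minutes of signal

b, a = butter(4, 13 / (fs / 2), btype="low")
envelope = filtfilt(b, a, np.abs(hilbert(rng.normal(size=n))))     # stand-in envelope
eeg = 0.5 * np.roll(envelope, int(0.1 * fs)) + rng.normal(size=n)  # envelope-following EEG + noise
eeg = filtfilt(b, a, eeg)                  # keep the < 13 Hz band, as in the study

lags = np.arange(int(0.25 * fs))           # ~0-250 ms of envelope history
X = np.column_stack([np.roll(envelope, L) for L in lags])
half = n // 2
model = Ridge(alpha=1.0).fit(X[:half], eeg[:half])          # fit on first half
r = np.corrcoef(model.predict(X[half:]), eeg[half:])[0, 1]  # test on second half
print(f"envelope-to-EEG mapping accuracy: r = {r:.2f}")
```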

  13. Cortical Inhibition Reduces Information Redundancy at Presentation of Communication Sounds in the Primary Auditory Cortex

    PubMed Central

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris

    2013-01-01

In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons but also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations with temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockage of GABAA receptors by gabazine (GBZ) application (10 μm, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μm saclofen; 10–50 μm CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of a single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli. PMID:23804094
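
    The rate-based mutual information and redundancy quantities have simple discrete estimators. The sketch below illustrates them on simulated spike counts from two recording sites; the binning and estimator choices are assumptions, not the paper's exact method.

```python
# Sketch: mutual information (MI) between stimulus identity and spike counts at
# single sites, and redundancy as the summed single-site MI minus the MI of the
# joint response. Counts are simulated.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(5)
n_stim, n_trials = 8, 50
stimuli = np.repeat(np.arange(n_stim), n_trials)

# Two sites whose mean rates depend (redundantly) on the same stimulus
site1 = rng.poisson(2 + stimuli)
site2 = rng.poisson(2 + stimuli)

mi1 = mutual_info_score(stimuli, site1)     # MI in nats
mi2 = mutual_info_score(stimuli, site2)
joint = site1 * 100 + site2                 # encode the joint response as one label
mi_joint = mutual_info_score(stimuli, joint)

redundancy = mi1 + mi2 - mi_joint           # > 0 indicates redundant coding
print(f"MI: site1={mi1:.2f}, site2={mi2:.2f}, joint={mi_joint:.2f} nats; "
      f"redundancy={redundancy:.2f}")
```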

  14. Musical Expectations Enhance Auditory Cortical Processing in Musicians: A Magnetoencephalography Study.

    PubMed

    Park, Jeong Mi; Chung, Chun Kee; Kim, June Sic; Lee, Kyung Myun; Seol, Jaeho; Yi, Suk Won

    2018-01-15

The present study investigated the influence of musical expectations on auditory representations in musicians and non-musicians using magnetoencephalography (MEG). Neuroscientific studies have demonstrated that musical syntax is processed in the inferior frontal gyri, eliciting an early right anterior negativity (ERAN), and anatomical evidence has shown that interconnections exist between the frontal cortex and the belt and parabelt regions of the auditory cortex (AC). Therefore, we anticipated that musical expectations would mediate neural activities in the AC via an efferent pathway. To test this hypothesis, we measured the auditory-evoked fields (AEFs) of seven musicians and seven non-musicians while they were listening to a five-chord progression in which the expectancy of the third chord was manipulated (highly expected, less expected, and unexpected). The results revealed that highly expected chords elicited shorter N1m (negative AEF at approximately 100 ms) and P2m (positive AEF at approximately 200 ms) latencies and larger P2m amplitudes in the AC than less-expected and unexpected chords. The relations between P2m amplitudes/latencies and harmonic expectations were similar in the two groups; however, the effects were more pronounced in musicians than in non-musicians. These findings suggest that auditory cortical processing is enhanced by musical knowledge and long-term training in a top-down manner, which is reflected in shortened N1m and P2m latencies and enhanced P2m amplitudes in the AC. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Alpha power indexes task-related networks on large and small scales: A multimodal ECoG study in humans and a non-human primate.

    PubMed

    de Pesters, A; Coon, W G; Brunner, P; Gunduz, A; Ritaccio, A L; Brunet, N M; de Weerd, P; Roberts, M J; Oostenveld, R; Fries, P; Schalk, G

    2016-07-01

    Performing different tasks, such as generating motor movements or processing sensory input, requires the recruitment of specific networks of neuronal populations. Previous studies suggested that power variations in the alpha band (8-12Hz) may implement such recruitment of task-specific populations by increasing cortical excitability in task-related areas while inhibiting population-level cortical activity in task-unrelated areas (Klimesch et al., 2007; Jensen and Mazaheri, 2010). However, the precise temporal and spatial relationships between the modulatory function implemented by alpha oscillations and population-level cortical activity remained undefined. Furthermore, while several studies suggested that alpha power indexes task-related populations across large and spatially separated cortical areas, it was largely unclear whether alpha power also differentially indexes smaller networks of task-related neuronal populations. Here we addressed these questions by investigating the temporal and spatial relationships of electrocorticographic (ECoG) power modulations in the alpha band and in the broadband gamma range (70-170Hz, indexing population-level activity) during auditory and motor tasks in five human subjects and one macaque monkey. In line with previous research, our results confirm that broadband gamma power accurately tracks task-related behavior and that alpha power decreases in task-related areas. More importantly, they demonstrate that alpha power suppression lags population-level activity in auditory areas during the auditory task, but precedes it in motor areas during the motor task. This suppression of alpha power in task-related areas was accompanied by an increase in areas not related to the task. In addition, we show for the first time that these differential modulations of alpha power could be observed not only across widely distributed systems (e.g., motor vs. auditory system), but also within the auditory system. Specifically, alpha power was suppressed in the locations within the auditory system that most robustly responded to particular sound stimuli. Altogether, our results provide experimental evidence for a mechanism that preferentially recruits task-related neuronal populations by increasing cortical excitability in task-related cortical areas and decreasing cortical excitability in task-unrelated areas. This mechanism is implemented by variations in alpha power and is common to humans and the non-human primate under study. These results contribute to an increasingly refined understanding of the mechanisms underlying the selection of the specific neuronal populations required for task execution. Copyright © 2016 Elsevier Inc. All rights reserved.
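
    The core analysis relates the time course of alpha power to that of broadband gamma power. The sketch below builds both power envelopes from one simulated ECoG channel and estimates their relative lag by cross-correlation; the filter settings and the built-in lag structure are illustrative assumptions, not the study's full statistics.

```python
# Sketch: alpha (8-12 Hz) and broadband gamma (70-170 Hz) power envelopes from
# one simulated ECoG channel, with their relative lag estimated by
# cross-correlation. The 100-ms lag is built into the simulation.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
rng = np.random.default_rng(6)
n = fs * 20
t = np.arange(n) / fs

def bandpass(sig, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

mod = np.sin(2 * np.pi * 0.5 * t)                    # slow task-related modulation
gamma_sig = (1 + 0.5 * mod) * bandpass(rng.normal(size=n), 70, 170)
alpha_sig = (1 - 0.5 * np.roll(mod, int(0.1 * fs))) * bandpass(rng.normal(size=n), 8, 12)
ecog = gamma_sig + alpha_sig + 0.5 * rng.normal(size=n)   # alpha suppression lags gamma

alpha_env = np.abs(hilbert(bandpass(ecog, 8, 12))) ** 2
gamma_env = np.abs(hilbert(bandpass(ecog, 70, 170))) ** 2
a = (alpha_env - alpha_env.mean()) / alpha_env.std()
g = (gamma_env - gamma_env.mean()) / gamma_env.std()

lags = np.arange(-fs // 2, fs // 2 + 1)              # search +/- 500 ms
xc = [np.dot(np.roll(a, -L), g) / n for L in lags]
peak_ms = 1000 * lags[np.argmax(np.abs(xc))] / fs    # positive: alpha lags gamma
print(f"alpha-gamma power-envelope lag at peak |corr|: {peak_ms:.0f} ms")
```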

  16. Dynamic oscillatory processes governing cued orienting and allocation of auditory attention

    PubMed Central

    Ahveninen, Jyrki; Huang, Samantha; Belliveau, John W.; Chang, Wei-Tang; Hämäläinen, Matti

    2013-01-01

In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained, fMRI-weighted magnetoencephalography/electroencephalography (MEG/EEG) source estimates. During consecutive trials, subjects were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear, to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30–100 Hz gamma activity at 200–1400 ms after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the temporoparietal junction, anterior insula, and inferior frontal (IFC) cortices to the right frontal eye fields. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5–15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200–600 ms after the cue onset, possibly reflecting crossmodal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral prefrontal cortex and anterior insula/IFC beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued vs. novelty-triggered orienting (600–1400 ms). Our results reveal sustained oscillatory patterns associated with voluntary engagement of auditory spatial attention, with the frontoparietal and temporal gamma increases being the best predictors of subsequent behavioral performance. PMID:23915050

  17. How do auditory cortex neurons represent communication sounds?

    PubMed

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Cortical contributions to the auditory frequency-following response revealed by MEG

    PubMed Central

    Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.

    2016-01-01

    The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
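
    A cortical FFR contribution at the stimulus fundamental is typically indexed as a spectral peak of the trial-averaged response at F0. The sketch below shows that quantification on a simulated source waveform; the fundamental frequency, trial count, and noise-floor definition are assumptions, and MEG source estimation itself is not shown.

```python
# Sketch: indexing an FFR component at the stimulus fundamental (F0) as a
# spectral peak of the trial-averaged response, relative to the local noise
# floor. The source waveform is simulated.
import numpy as np

fs, f0 = 1000, 98.0                        # Hz; f0 = assumed stimulus fundamental
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(7)

n_trials = 2000
trials = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.normal(size=(n_trials, t.size))
response = trials.mean(axis=0)             # averaging reveals the phase-locked part

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0_bin = np.argmin(np.abs(freqs - f0))
noise_floor = np.median(spectrum[(freqs > 60) & (freqs < 140)])
snr_db = 20 * np.log10(spectrum[f0_bin] / noise_floor)
print(f"FFR amplitude at {f0:.0f} Hz: {snr_db:.1f} dB above the local noise floor")
```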

  19. Cortico‐cortical connectivity within ferret auditory cortex

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; King, Andrew J.

    2015-01-01

Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency‐matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non‐overlapping, consistent with the non‐tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. J. Comp. Neurol. 523:2187–2210, 2015. © 2015 Wiley Periodicals, Inc. PMID:25845831

  20. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE PAGES

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...

    2017-06-30

Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  2. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    PubMed

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  4. Long-term exposure to noise impairs cortical sound processing and attention control.

    PubMed

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  5. Network and external perturbation induce burst synchronisation in cat cerebral cortex

    NASA Astrophysics Data System (ADS)

    Lameu, Ewandson L.; Borges, Fernando S.; Borges, Rafael R.; Batista, Antonio M.; Baptista, Murilo S.; Viana, Ricardo L.

    2016-05-01

The brains of mammals are divided into different cortical areas that are anatomically connected, forming larger networks that perform cognitive tasks. The cat cerebral cortex is composed of 65 areas organised into the visual, auditory, somatosensory-motor and frontolimbic cognitive regions. We have built a network of networks, in which networks are connected among themselves according to the connections observed in the cat cortical areas, aiming to study how inputs drive the synchronous behaviour in this cat brain-like network. We show that without external perturbations it is possible to observe a high level of bursting synchronisation between neurons within almost all areas, except for the auditory area. Bursting synchronisation appears between neurons in the auditory region when an external perturbation is applied in another cognitive area. This is clear evidence that burst synchronisation and collective behaviour in the brain might be a process mediated by other brain areas under stimulation.

  6. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    PubMed

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that, in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and eFC from the AC to the inferior parietal lobule. In later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration between sensory, perceptional, attentional, motor, emotional, and executive processes.
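
    The directed iFC/eFC measures rest on Granger causality between pairs of recording sites. A minimal sketch of one pairwise test on simulated series is shown below; the statsmodels test and the chosen lags are illustrative assumptions, not the study's exact connectivity pipeline.

```python
# Sketch: a pairwise Granger-causality test between an auditory-cortex (AC)
# signal and a non-auditory site, the ingredient behind directed connectivity
# estimates. Series are simulated.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)
n = 2000
ac = rng.normal(size=n)
non_auditory = np.zeros(n)
for i in range(2, n):                       # the non-auditory site follows AC by 2 samples
    non_auditory[i] = 0.6 * ac[i - 2] + 0.3 * non_auditory[i - 1] + rng.normal()

# Column convention: test whether the 2nd column Granger-causes the 1st
data = np.column_stack([non_auditory, ac])  # AC -> non-auditory direction
res = grangercausalitytests(data, maxlag=5, verbose=False)
p_lag2 = res[2][0]["ssr_ftest"][1]          # p-value of the F test at lag 2
print(f"AC -> non-auditory, lag 2: p = {p_lag2:.3g}")
```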

  7. Automated cortical auditory evoked potentials threshold estimation in neonates.

    PubMed

    Oliveira, Lilian Sanches; Didoné, Dayane Domeneghini; Durante, Alessandra Spada

    2018-02-02

The evaluation of cortical auditory evoked potentials has been the focus of scientific studies in infants. Some authors have reported that automated response detection is effective in exploring these potentials in infants, but few have reported its efficacy in the search for thresholds. To analyze the latency, amplitude and thresholds of cortical auditory evoked potentials using an automatic response detection device in a neonatal population. This is a cross-sectional, observational study. Cortical auditory evoked potentials were recorded in response to pure-tone stimuli at 500, 1000, 2000 and 4000 Hz, presented in an intensity range between 0 and 80 dB HL using a single-channel recording. P1 detection was performed in an exclusively automated fashion, using Hotelling's T2 statistical test. The latency and amplitude were obtained manually by three examiners. The study comprised 39 neonates up to 28 days old, of both sexes, with present otoacoustic emissions and no risk factors for hearing loss. With the protocol used, cortical auditory evoked potential responses were detected in all subjects at high intensities and at threshold. The mean thresholds were 24.8±10.4 dB HL, 25±9.0 dB HL, 28±7.8 dB HL and 29.4±6.6 dB HL for 500, 1000, 2000 and 4000 Hz, respectively. Reliable responses were obtained in the assessment of cortical auditory potentials in neonates using a device for automatic response detection. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
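
    Automated detection with Hotelling's T2 amounts to testing whether a multivariate summary of the post-stimulus epochs differs reliably from zero. The sketch below implements a one-sample T2 test on simulated epoch features; the windowing and detection criterion are assumptions rather than the device's exact algorithm.

```python
# Sketch: automated response detection with a one-sample Hotelling's T2 test.
# Each epoch is summarized by mean voltages in a few post-stimulus windows and
# T2 tests whether that feature vector differs from zero across epochs.
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_pvalue(features):
    """features: (n_epochs, n_features); H0: the population mean vector is zero."""
    n, p = features.shape
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    f_stat = t2 * (n - p) / (p * (n - 1))
    return f_dist.sf(f_stat, p, n - p)

rng = np.random.default_rng(9)
n_epochs, n_windows = 60, 5
features = rng.normal(size=(n_epochs, n_windows))   # simulated epoch summaries
features[:, 2] += 0.5                               # a consistent P1-like deflection

print(f"response detected? p = {hotelling_t2_pvalue(features):.4f}")
```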

  8. Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation

    PubMed Central

    2013-01-01

Background Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training and one post-training sessions. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. Results After both passive listening and active training, the amplitude of the P2m wave with latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. Conclusion The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability to scrutinize fine differences in pre-voicing time. The different trajectories of brain and behaviour changes suggest that the P2m increase, which preceded the behavioural gains, reflects brain processes that are necessary precursors of perceptual learning. Caution is therefore required when interpreting a P2 amplitude increase between recordings made before and after training and learning. PMID:24314010

  9. The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies

    PubMed Central

    2013-01-01

Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD), and also from normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings. PMID:23351174

  10. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Early neural disruption and auditory processing outcomes in rodent models: implications for developmental language disability

    PubMed Central

    Fitch, R. Holly; Alexander, Michelle L.; Threlkeld, Steven W.

    2013-01-01

    Most researchers in the field of neural plasticity are familiar with the “Kennard Principle,” which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate—both developmentally and functionally—with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic (HI) injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human “term,” but only transient deficits (undetectable in adulthood) when induced in a “preterm” window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing (RAP) outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in human populations. PMID:24155699

  12. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    PubMed Central

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to emerge within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  13. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    PubMed

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  14. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.

    PubMed

    Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier

    2016-10-01

Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which may be important for the patients' internal supramodal representations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Gender-specific effects of prenatal and adolescent exposure to tobacco smoke on auditory and visual attention.

    PubMed

    Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R

    2007-12-01

    Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.

  16. Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study.

    PubMed

    Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina

    2013-01-01

    To study how auditory cortical processing is affected by anticipating and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that following the cue that indicates the valence of the upcoming sound, the auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.

  17. Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study

    PubMed Central

    Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina

    2013-01-01

    To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting 6 s, were played in random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) presented 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as the N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period, activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields predominated in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds imply that, once the cue has indicated the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category throughout the anticipation period. PMID:24278270

  18. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  19. Auditory Cortical Processing in Real-World Listening: The Auditory System Going Real

    PubMed Central

    Bizley, Jennifer; Shamma, Shihab A.; Wang, Xiaoqin

    2014-01-01

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. PMID:25392481

  20. Auditory cortical processing in real-world listening: the auditory system going real.

    PubMed

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  1. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
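
    The decoding step described in this record is, in generic terms, a cross-validated classifier applied to single-trial evoked responses. The sketch below illustrates only that generic approach; it is not the authors' pipeline, and the epoch dimensions, labels, classifier settings, and simulated data are assumptions made for illustration.

```python
# Hedged sketch: cross-validated decoding of speech-sound identity from
# single-trial auditory evoked responses. Not the authors' pipeline; the
# epoch dimensions, labels, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 32, 100          # hypothetical epoch shape
X = rng.normal(size=(n_trials, n_channels, n_times))  # single-trial epochs
y = rng.integers(0, 2, size=n_trials)                 # two speech-sound labels

# Add a weak class-dependent signal so decoding is above chance in this demo.
X[y == 1, :8, 40:60] += 0.3

# Flatten channel x time samples into a feature vector per trial.
X_flat = X.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, dual=False))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_flat, y, cv=cv)

print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```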

  2. Stability of the Cortical Sensory Waveforms, the P1-N1-P2 Complex and T-Complex, of Auditory Evoked Potentials

    ERIC Educational Resources Information Center

    Wagner, Monica; Shafer, Valerie L.; Haxhari, Evis; Kiprovski, Kevin; Behrmann, Katherine; Griffiths, Tara

    2017-01-01

    Purpose: Atypical cortical sensory waveforms reflecting impaired encoding of auditory stimuli may result from inconsistency in cortical response to the acoustic feature changes within spoken words. Thus, the present study assessed intrasubject stability of the P1-N1-P2 complex and T-complex to multiple productions of spoken nonwords in 48 adults…

  3. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading, and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  4. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723
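
    The signal- and noise-correlation analysis mentioned in this record can be illustrated with a minimal numerical sketch. The version below assumes a simulated trials-by-stimuli spike-count matrix for a pair of neurons; it shows the standard definitions (signal correlation from trial-averaged tuning, noise correlation from per-stimulus residuals) rather than the study's data or exact procedure.

```python
# Hedged sketch: separating signal and noise correlations for a pair of
# neurons from trials x stimuli response matrices. Array shapes and the
# simulated tuning curves are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_stimuli, n_trials = 8, 30

# Simulated tone-evoked spike counts for two neurons with similar tuning
# plus shared trial-to-trial variability.
tuning_a = 10 + 5 * np.sin(np.linspace(0, np.pi, n_stimuli))
tuning_b = 12 + 4 * np.sin(np.linspace(0.2, np.pi + 0.2, n_stimuli))
shared_noise = rng.normal(size=(n_trials, n_stimuli))
resp_a = tuning_a + shared_noise + rng.normal(size=(n_trials, n_stimuli))
resp_b = tuning_b + shared_noise + rng.normal(size=(n_trials, n_stimuli))

# Signal correlation: correlation of trial-averaged responses across stimuli.
signal_corr = np.corrcoef(resp_a.mean(axis=0), resp_b.mean(axis=0))[0, 1]

# Noise correlation: correlation of trial-to-trial residuals around each
# stimulus mean, pooled over stimuli.
resid_a = (resp_a - resp_a.mean(axis=0)).ravel()
resid_b = (resp_b - resp_b.mean(axis=0)).ravel()
noise_corr = np.corrcoef(resid_a, resid_b)[0, 1]

print(f"signal correlation: {signal_corr:.2f}, noise correlation: {noise_corr:.2f}")
```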

  5. Differential Receptive Field Properties of Parvalbumin and Somatostatin Inhibitory Neurons in Mouse Auditory Cortex.

    PubMed

    Li, Ling-Yun; Xiong, Xiaorui R; Ibrahim, Leena A; Yuan, Wei; Tao, Huizhong W; Zhang, Li I

    2015-07-01

    Cortical inhibitory circuits play important roles in shaping sensory processing. In auditory cortex, however, functional properties of genetically identified inhibitory neurons are poorly characterized. By two-photon imaging-guided recordings, we specifically targeted 2 major types of cortical inhibitory neuron, parvalbumin (PV) and somatostatin (SOM) expressing neurons, in superficial layers of mouse auditory cortex. We found that PV cells exhibited broader tonal receptive fields with lower intensity thresholds and stronger tone-evoked spike responses compared with SOM neurons. The latter exhibited similar frequency selectivity as excitatory neurons. The broader/weaker frequency tuning of PV neurons was attributed to a broader range of synaptic inputs and stronger subthreshold responses elicited, which resulted in a higher efficiency in the conversion of input to output. In addition, onsets of both the input and spike responses of SOM neurons were significantly delayed compared with PV and excitatory cells. Our results suggest that PV and SOM neurons engage in auditory cortical circuits in different manners: while PV neurons may provide broadly tuned feedforward inhibition for a rapid control of ascending inputs to excitatory neurons, the delayed and more selective inhibition from SOM neurons may provide a specific modulation of feedback inputs on their distal dendrites. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Effects of congruent and incongruent visual cues on speech perception and brain activity in cochlear implant users.

    PubMed

    Song, Jae-Jin; Lee, Hyo-Jeong; Kang, Hyejin; Lee, Dong Soo; Chang, Sun O; Oh, Seung Ha

    2015-03-01

    While deafness-induced plasticity has been investigated in the visual and auditory domains, not much is known about language processing in audiovisual multimodal environments for patients with restored hearing via cochlear implant (CI) devices. Here, we examined the effect of agreeing or conflicting visual inputs on auditory processing in deaf patients equipped with degraded artificial hearing. Ten post-lingually deafened CI users with good performance, along with matched control subjects, underwent H2(15)O positron emission tomography (PET) scans while carrying out a behavioral task requiring the extraction of speech information from unimodal auditory stimuli, bimodal audiovisual congruent stimuli, and incongruent stimuli. Regardless of congruency, the control subjects demonstrated activation of the auditory and visual sensory cortices, as well as the superior temporal sulcus, the classical multisensory integration area, indicating a bottom-up multisensory processing strategy. Compared to CI users, the control subjects exhibited activation of the right ventral premotor-supramarginal pathway. In contrast, CI users primarily activated the visual cortices, and did so more in the congruent audiovisual condition than in the null condition. In addition, compared to controls, CI users displayed an activation focus in the right amygdala for congruent audiovisual stimuli. The most notable difference between the two groups was an activation focus in the left inferior frontal gyrus in CI users confronted with incongruent audiovisual stimuli, suggesting top-down cognitive modulation for audiovisual conflict. Correlation analysis revealed that good speech performance was positively correlated with right amygdala activity for the congruent condition, but negatively correlated with activity in the bilateral visual cortices regardless of congruency. Taken together, these results suggest that for multimodal inputs, cochlear implant users are more vision-reliant when processing congruent stimuli and are disturbed more by visual distractors when confronted with incongruent audiovisual stimuli. To cope with this multimodal conflict, CI users activate the left inferior frontal gyrus to adopt a top-down cognitive modulation pathway, whereas normal-hearing individuals primarily adopt a bottom-up strategy.

  7. Corticofugal modulation of time-domain processing of biosonar information in bats.

    PubMed

    Yan, J; Suga, N

    1996-08-23

    The Jamaican mustached bat has delay-tuned neurons in the inferior colliculus, medial geniculate body, and auditory cortex. The responses of these neurons to an echo are facilitated by a biosonar pulse emitted by the bat when the echo returns with a particular delay from a target located at a particular distance. Electrical stimulation of cortical delay-tuned neurons increases the delay-tuned responses of collicular neurons tuned to the same echo delay as the cortical neurons and decreases those of collicular neurons tuned to different echo delays. Cortical neurons improve information processing in the inferior colliculus by way of the corticocollicular projection.

  8. Characterizing the roles of alpha and theta oscillations in multisensory attention.

    PubMed

    Keller, Arielle S; Payne, Lisa; Sekuler, Robert

    2017-05-01

    Cortical alpha oscillations (8-13Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4-7Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and with short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta's association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Characterizing the roles of alpha and theta oscillations in multisensory attention

    PubMed Central

    Keller, Arielle S.; Payne, Lisa; Sekuler, Robert

    2017-01-01

    Cortical alpha oscillations (8–13 Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4–7 Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and with short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta’s association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. PMID:28259771
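
    A common way to quantify the alpha (8-13 Hz) and theta (4-7 Hz) activity discussed in this record is to integrate the power spectral density over each band. The sketch below shows that generic computation on a simulated single-channel epoch; the sampling rate, epoch length, and signal composition are assumptions, not the study's parameters.

```python
# Hedged sketch: estimating alpha (8-13 Hz) and theta (4-7 Hz) band power
# from one EEG epoch via the Welch power spectral density. The sampling rate
# and simulated signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # a 4-s single-channel epoch
rng = np.random.default_rng(1)
eeg = (rng.normal(scale=1.0, size=t.size)  # broadband background activity
       + 2.0 * np.sin(2 * np.pi * 10 * t)  # alpha-band component
       + 1.0 * np.sin(2 * np.pi * 6 * t))  # theta-band component

alpha = band_power(eeg, fs, (8, 13))
theta = band_power(eeg, fs, (4, 7))
print(f"alpha power: {alpha:.2f}, theta power: {theta:.2f}")
```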

  10. Matrix metalloproteinase-9 deletion rescues auditory evoked potential habituation deficit in a mouse model of Fragile X Syndrome.

    PubMed

    Lovelace, Jonathan W; Wen, Teresa H; Reinhard, Sarah; Hsu, Mike S; Sidhu, Harpreet; Ethell, Iryna M; Binder, Devin K; Razak, Khaleel A

    2016-05-01

    Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition, and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons, suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event-related potentials (ERPs) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate-dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than that of WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to that of WT mice, implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological pre-clinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. Significance statement: Fragile X Syndrome (FXS) is the leading known genetic cause of autism spectrum disorders. Individuals with FXS show symptoms of auditory hypersensitivity. These symptoms may arise due to sustained neural responses to repeated sounds, but the underlying mechanisms remain unclear. For the first time, this study shows deficits in habituation of neural responses to repeated sounds in Fmr1 KO mice, as seen in humans with FXS. We also report an abnormally high level of matrix metalloprotease-9 (MMP-9) in the auditory cortex of Fmr1 KO mice and that deletion of Mmp-9 from Fmr1 KO mice reverses habituation deficits. These data provide a translation-relevant electrophysiological biomarker for sensory deficits in FXS and implicate MMP-9 as a target for drug discovery. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  12. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  13. Assessment of anodal and cathodal transcranial direct current stimulation (tDCS) on MMN-indexed auditory sensory processing.

    PubMed

    Impey, Danielle; de la Salle, Sara; Knott, Verner

    2016-06-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a very weak constant current to temporarily excite (anodal stimulation) or inhibit (cathodal stimulation) activity in the brain area of interest via small electrodes placed on the scalp. Currently, tDCS of the frontal cortex is being used as a tool to investigate cognition in healthy controls and to improve symptoms in neurological and psychiatric patients. tDCS has been found to facilitate cognitive performance on measures of attention, memory, and frontal-executive functions. Recently, a short session of anodal tDCS over the temporal lobe has been shown to increase auditory sensory processing as indexed by the Mismatch Negativity (MMN) event-related potential (ERP). This preliminary pilot study examined the separate and interacting effects of both anodal and cathodal tDCS on MMN-indexed auditory pitch discrimination. In a randomized, double blind design, the MMN was assessed before (baseline) and after tDCS (2mA, 20min) in 2 separate sessions, one involving 'sham' stimulation (the device is turned off), followed by anodal stimulation (to temporarily excite cortical activity locally), and one involving cathodal stimulation (to temporarily decrease cortical activity locally), followed by anodal stimulation. Results demonstrated that anodal tDCS over the temporal cortex increased MMN-indexed auditory detection of pitch deviance, and while cathodal tDCS decreased auditory discrimination in baseline-stratified groups, subsequent anodal stimulation did not significantly alter MMN amplitudes. These findings strengthen the position that tDCS effects on cognition extend to the neural processing of sensory input and raise the possibility that this neuromodulatory technique may be useful for investigating sensory processing deficits in clinical populations. Copyright © 2016 Elsevier Inc. All rights reserved.
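
    The MMN index used in this record is conventionally obtained as the deviant-minus-standard difference wave. The sketch below shows that generic computation on simulated epochs; the sampling rate, latency window, and simulated component are assumptions for illustration and do not reproduce the study's recording or stimulation parameters.

```python
# Hedged sketch: computing an MMN estimate as the deviant-minus-standard
# difference wave and taking its peak amplitude in a typical latency window.
# Epoch layout, sampling rate, and latency window are illustrative assumptions.
import numpy as np

fs = 500                                   # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.4, 1 / fs)       # epoch from -100 to +400 ms
rng = np.random.default_rng(2)

def make_epochs(n, mmn_gain):
    """Simulate single-trial epochs with an optional negativity near 150 ms."""
    noise = rng.normal(scale=2.0, size=(n, times.size))
    component = -mmn_gain * np.exp(-((times - 0.15) ** 2) / (2 * 0.03 ** 2))
    return noise + component

standard_epochs = make_epochs(400, mmn_gain=0.0)
deviant_epochs = make_epochs(80, mmn_gain=3.0)

difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

window = (times >= 0.10) & (times <= 0.25)      # typical MMN latency window
peak_idx = np.argmin(difference_wave[window])
peak_amp = difference_wave[window][peak_idx]
peak_lat = times[window][peak_idx]
print(f"MMN peak: {peak_amp:.2f} uV at {peak_lat * 1000:.0f} ms")
```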

  14. A role for descending auditory cortical projections in songbird vocal learning

    PubMed Central

    Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S

    2014-01-01

    Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934

  15. Prenatal thalamic waves regulate cortical area size prior to sensory processing.

    PubMed

    Moreno-Juan, Verónica; Filipchuk, Anton; Antón-Bolaños, Noelia; Mezzera, Cecilia; Gezelius, Henrik; Andrés, Belen; Rodríguez-Malmierca, Luis; Susín, Rafael; Schaad, Olivier; Iwasato, Takuji; Schüle, Roland; Rutlin, Michael; Nelson, Sacha; Ducret, Sebastien; Valdeolmillos, Miguel; Rijli, Filippo M; López-Bendito, Guillermina

    2017-02-03

    The cerebral cortex is organized into specialized sensory areas, whose initial territory is determined by intracortical molecular determinants. Yet, sensory cortical area size appears to be fine-tuned during development to respond to functional adaptations. Here we demonstrate the existence of a prenatal sub-cortical mechanism that regulates cortical area size in mice. This mechanism is mediated by spontaneous thalamic calcium waves that propagate among sensory-modality thalamic nuclei up to the cortex and that provide a means of communication among sensory systems. Wave pattern alterations in one nucleus lead to changes in the pattern of the remaining ones, triggering changes in thalamic gene expression and cortical area size. Thus, silencing calcium waves in the auditory thalamus induces Rorβ upregulation in a neighbouring somatosensory nucleus, preluding the enlargement of the barrel-field. These findings reveal that embryonic thalamic calcium waves coordinate cortical sensory area patterning and plasticity prior to sensory information processing.

  16. Prenatal thalamic waves regulate cortical area size prior to sensory processing

    PubMed Central

    Moreno-Juan, Verónica; Filipchuk, Anton; Antón-Bolaños, Noelia; Mezzera, Cecilia; Gezelius, Henrik; Andrés, Belen; Rodríguez-Malmierca, Luis; Susín, Rafael; Schaad, Olivier; Iwasato, Takuji; Schüle, Roland; Rutlin, Michael; Nelson, Sacha; Ducret, Sebastien; Valdeolmillos, Miguel; Rijli, Filippo M.; López-Bendito, Guillermina

    2017-01-01

    The cerebral cortex is organized into specialized sensory areas, whose initial territory is determined by intracortical molecular determinants. Yet, sensory cortical area size appears to be fine-tuned during development to respond to functional adaptations. Here we demonstrate the existence of a prenatal sub-cortical mechanism that regulates cortical area size in mice. This mechanism is mediated by spontaneous thalamic calcium waves that propagate among sensory-modality thalamic nuclei up to the cortex and that provide a means of communication among sensory systems. Wave pattern alterations in one nucleus lead to changes in the pattern of the remaining ones, triggering changes in thalamic gene expression and cortical area size. Thus, silencing calcium waves in the auditory thalamus induces Rorβ upregulation in a neighbouring somatosensory nucleus, preluding the enlargement of the barrel-field. These findings reveal that embryonic thalamic calcium waves coordinate cortical sensory area patterning and plasticity prior to sensory information processing. PMID:28155854

  17. Neurophysiological mechanisms of cortical plasticity impairments in schizophrenia and modulation by the NMDA receptor agonist D-serine

    PubMed Central

    Kantrowitz, Joshua T.; Epstein, Michael L.; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M.; Revheim, Nadine; Lehrfeld, Nayla P.; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C.

    2016-01-01

    Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time–frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. PMID:27913408

  18. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  19. Congenital deafness affects deep layers in primary and secondary auditory cortex

    PubMed Central

    Berger, Christoph; Kühne, Daniela; Scheper, Verena

    2017-01-01

    Abstract Congenital deafness leads to functional deficits in the auditory cortex for which early cochlear implantation can effectively compensate. Most of these deficits have been demonstrated functionally. Furthermore, the majority of previous studies on deafness have involved the primary auditory cortex; knowledge of higher‐order areas is limited to effects of cross‐modal reorganization. In this study, we compared the cortical cytoarchitecture of four cortical areas in adult hearing and congenitally deaf cats (CDCs): the primary auditory field A1, two secondary auditory fields, namely the dorsal zone and second auditory field (A2); and a reference visual association field (area 7) in the same section stained either using Nissl or SMI‐32 antibodies. The general cytoarchitectonic pattern and the area‐specific characteristics in the auditory cortex remained unchanged in animals with congenital deafness. Whereas area 7 did not differ between the groups investigated, all auditory fields were slightly thinner in CDCs, this being caused by reduced thickness of layers IV–VI. The study documents that, while the cytoarchitectonic patterns are in general independent of sensory experience, reduced layer thickness is observed in both primary and higher‐order auditory fields in layer IV and infragranular layers. The study demonstrates differences in effects of congenital deafness between supragranular and other cortical layers, but similar dystrophic effects in all investigated auditory fields. PMID:28643417

  20. Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.

    PubMed

    Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly

    2012-05-01

    Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation in 'visual' cortex by light or retinal activity in the former condition, and suggests development of subcortical auditory input to the geniculo-striate pathway.

  1. Lifespan Differences in Cortical Dynamics of Auditory Perception

    ERIC Educational Resources Information Center

    Muller, Viktor; Gruber, Walter; Klimesch, Wolfgang; Lindenberger, Ulman

    2009-01-01

    Using electroencephalographic recordings (EEG), we assessed differences in oscillatory cortical activity during auditory-oddball performance between children aged 9-13 years, younger adults, and older adults. From childhood to old age, phase synchronization increased within and between electrodes, whereas whole power and evoked power decreased. We…

  2. Cortical activity patterns predict speech discrimination ability

    PubMed Central

    Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P

    2010-01-01

    Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
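
    The contrast between timing-preserved and timing-eliminated neural discrimination described in this record can be illustrated with a simple classifier comparison: the same spike trains are classified either from fine temporal bins or from a single spike count per trial. The sketch below uses simulated spike trains and a leave-one-out nearest-neighbour rule; these are illustrative assumptions, not the study's data or classifier.

```python
# Hedged sketch: neural discrimination with spike timing preserved (1-ms bins)
# versus eliminated (one spike count per trial), using a leave-one-out
# nearest-neighbour classifier. The simulated spike trains are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_trials, duration_ms = 60, 100

def simulate_trials(peak_ms):
    """Spike trains with the same mean rate but a class-specific latency."""
    trials = np.zeros((n_trials, duration_ms))
    for i in range(n_trials):
        n_spikes = rng.poisson(8)                 # same mean count for both classes
        spike_times = np.clip(rng.normal(peak_ms, 5, size=n_spikes), 0, duration_ms - 1)
        np.add.at(trials[i], spike_times.astype(int), 1)
    return trials

X = np.vstack([simulate_trials(20), simulate_trials(40)])   # 1-ms bins
y = np.repeat([0, 1], n_trials)

def loo_nearest_neighbour(features, labels):
    """Leave-one-out nearest-neighbour classification accuracy."""
    correct = 0
    for i in range(len(labels)):
        dists = np.linalg.norm(features - features[i], axis=1)
        dists[i] = np.inf                      # exclude the held-out trial
        correct += labels[np.argmin(dists)] == labels[i]
    return correct / len(labels)

timing_acc = loo_nearest_neighbour(X, y)                            # timing kept
rate_acc = loo_nearest_neighbour(X.sum(axis=1, keepdims=True), y)   # count only
print(f"timing-preserved accuracy: {timing_acc:.2f}, rate-only: {rate_acc:.2f}")
```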

  3. Dichotic listening in patients with splenial and nonsplenial callosal lesions.

    PubMed

    Pollmann, Stefan; Maertens, Marianne; von Cramon, D Yves; Lepsien, Joeran; Hugdahl, Kenneth

    2002-01-01

    The authors found splenial lesions to be associated with left ear suppression in dichotic listening of consonant-vowel syllables. This was found in both a rapid presentation dichotic monitoring task and a standard dichotic listening task, ruling out attentional limitations in the processing of high stimulus loads as a confounding factor. Moreover, directed attention to the left ear did not improve left ear target detection in the patients, independent of callosal lesion location. The authors' data may indicate that auditory callosal fibers pass through the splenium more posterior than previously thought. However, further studies should investigate whether callosal fibers between primary and secondary auditory cortices, or between higher level multimodal cortices, are vital for the detection of left ear targets in dichotic listening.

  4. Aging-related changes in auditory and visual integration measured with MEG

    PubMed Central

    Stephen, Julia M.; Knoefel, Janice E.; Adair, John; Hart, Blaine; Aine, Cheryl J.

    2010-01-01

    As noted in the aging literature, processing delays often occur in the central nervous system with increasing age, which is often attributable in part to demyelination. In addition, differential slowing between sensory systems has been shown to be most discrepant between visual (up to 20 ms) and auditory systems (< 5 ms). Therefore, we used MEG to measure the multisensory integration response in auditory association cortex in young and elderly participants to better understand the effects of aging on multisensory integration abilities. Results show a main effect for reaction times (RTs); the mean RTs of the elderly were significantly slower than the young. In addition, in the young we found significant facilitation of RTs to the multisensory stimuli relative to both unisensory stimuli, when comparing the cumulative distribution functions, which was not evident for the elderly. We also identified a significant interaction between age and condition in the superior temporal gyrus. In particular, the elderly had larger amplitude responses (~100 ms) to auditory stimuli relative to the young when auditory stimuli alone were presented, whereas the amplitude of responses to the multisensory stimuli was reduced in the elderly, relative to the young. This suppressed cortical multisensory integration response in the elderly, which corresponded with slower RTs and reduced RT facilitation effects in the elderly, has not been reported previously and may be related to poor cortical integration based on timing changes in unisensory processing in the elderly. PMID:20713130

  5. Aging-related changes in auditory and visual integration measured with MEG.

    PubMed

    Stephen, Julia M; Knoefel, Janice E; Adair, John; Hart, Blaine; Aine, Cheryl J

    2010-10-22

    As noted in the aging literature, processing delays often occur in the central nervous system with increasing age, which is often attributable in part to demyelination. In addition, differential slowing between sensory systems has been shown to be most discrepant between visual (up to 20ms) and auditory systems (<5ms). Therefore, we used MEG to measure the multisensory integration response in auditory association cortex in young and elderly participants to better understand the effects of aging on multisensory integration abilities. Results show a main effect for reaction times (RTs); the mean RTs of the elderly were significantly slower than the young. In addition, in the young we found significant facilitation of RTs to the multisensory stimuli relative to both unisensory stimuli, when comparing the cumulative distribution functions, which was not evident for the elderly. We also identified a significant interaction between age and condition in the superior temporal gyrus. In particular, the elderly had larger amplitude responses (∼100ms) to auditory stimuli relative to the young when auditory stimuli alone were presented, whereas the amplitude of responses to the multisensory stimuli was reduced in the elderly, relative to the young. This suppressed cortical multisensory integration response in the elderly, which corresponded with slower RTs and reduced RT facilitation effects, has not been reported previously and may be related to poor cortical integration based on timing changes in unisensory processing in the elderly. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
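
    The cumulative-distribution-function comparison of reaction times mentioned in these two records can be illustrated with empirical CDFs and, as one common benchmark, Miller's race-model bound. The sketch below uses simulated reaction times; it is a generic illustration, not necessarily the exact comparison applied in the study.

```python
# Hedged sketch: empirical reaction-time CDFs for unisensory and multisensory
# conditions, with the race-model bound (F_A + F_V) as a common benchmark.
# The simulated RT distributions and time grid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
rt_aud = rng.normal(420, 60, 200)   # simulated auditory RTs (ms)
rt_vis = rng.normal(450, 60, 200)   # simulated visual RTs (ms)
rt_av  = rng.normal(380, 55, 200)   # simulated audiovisual RTs (ms)

def ecdf(samples, grid):
    """Empirical cumulative distribution evaluated on a common time grid."""
    return np.searchsorted(np.sort(samples), grid, side="right") / samples.size

grid = np.arange(250, 651, 10)                  # 250-650 ms in 10-ms steps
F_a, F_v, F_av = (ecdf(rt, grid) for rt in (rt_aud, rt_vis, rt_av))
race_bound = np.minimum(F_a + F_v, 1.0)

# Facilitation: the AV CDF lies above the faster unisensory CDF; exceeding the
# race-model bound would indicate coactivation beyond statistical facilitation.
facilitation = F_av - np.maximum(F_a, F_v)
violation = F_av - race_bound
print(f"max facilitation: {facilitation.max():.2f}")
print(f"max race-model violation: {violation.max():.2f}")
```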

  6. Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients

    PubMed Central

    Krause, Bryan M.

    2013-01-01

    The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
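
    The point made in this record, that trial-by-trial variability of an evoked component can mimic variance and phase changes, can be reproduced in a small simulation: trials contain only a variable-amplitude evoked burst plus noise, yet the across-trial variance and inter-trial phase coherence of the complex coefficients still rise around the response. The wavelet, frequency, and trial structure below are illustrative assumptions, not the study's analysis settings.

```python
# Hedged sketch: mean, across-trial variance, and inter-trial phase coherence
# of complex wavelet coefficients at one frequency, for trials built only from
# a variable-amplitude evoked burst plus noise (no phase reset of ongoing
# oscillations). All parameters are illustrative assumptions.
import numpy as np

fs, freq = 500, 10                      # sampling rate (Hz), analysis frequency
t = np.arange(0, 1, 1 / fs)             # 1-s trials
rng = np.random.default_rng(6)

# Complex Morlet-style wavelet at `freq` (hand-built, ~7 cycles).
sigma = 7 / (2 * np.pi * freq)
wt = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
wavelet /= np.abs(wavelet).sum()

# Trials: an evoked 10-Hz burst around 200 ms whose amplitude varies across
# trials, embedded in independent broadband noise.
burst = np.sin(2 * np.pi * freq * t) * np.exp(-((t - 0.2) ** 2) / (2 * 0.05 ** 2))
n_trials = 100
amps = rng.gamma(shape=2.0, scale=1.0, size=n_trials)        # trial-to-trial gain
trials = amps[:, None] * burst + rng.normal(scale=0.5, size=(n_trials, t.size))

# Complex coefficients at `freq` for every trial.
coeffs = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])

evoked = coeffs.mean(axis=0)                            # change in mean
variance = coeffs.var(axis=0)                           # change in variance
itc = np.abs(np.mean(coeffs / np.abs(coeffs), axis=0))  # phase concentration

peak = np.argmax(np.abs(evoked))
print(f"at the evoked peak: |mean|={np.abs(evoked[peak]):.3f}, "
      f"variance={variance[peak]:.3f}, ITC={itc[peak]:.2f}")
```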

  7. Histone Deacetylase Inhibition via RGFP966 Releases the Brakes on Sensory Cortical Plasticity and the Specificity of Memory Formation.

    PubMed

    Bieszczad, Kasia M; Bechay, Kiro; Rusche, James R; Jacques, Vincent; Kudugunti, Shashi; Miao, Wenyan; Weinberger, Norman M; McGaugh, James L; Wood, Marcelo A

    2015-09-23

    Research over the past decade indicates a novel role for epigenetic mechanisms in memory formation. Of particular interest is chromatin modification by histone deacetylases (HDACs), which, in general, negatively regulate transcription. HDAC deletion or inhibition facilitates transcription during memory consolidation and enhances long-lasting forms of synaptic plasticity and long-term memory. A key open question remains: How does blocking HDAC activity lead to memory enhancements? To address this question, we tested whether a normal function of HDACs is to gate information processing during memory formation. We used a class I HDAC inhibitor, RGFP966 (C21H19FN4O), to test the role of HDAC inhibition for information processing in an auditory memory model of learning-induced cortical plasticity. HDAC inhibition may act beyond memory enhancement per se to instead regulate information in ways that lead to encoding more vivid sensory details into memory. Indeed, we found that RGFP966 controls memory induction for acoustic details of sound-to-reward learning. Rats treated with RGFP966 while learning to associate sound with reward had stronger memory and additional information encoded into memory for highly specific features of sounds associated with reward. Moreover, behavioral effects occurred with unusually specific plasticity in primary auditory cortex (A1). Class I HDAC inhibition appears to engage A1 plasticity that enables additional acoustic features to become encoded in memory. Thus, epigenetic mechanisms act to regulate sensory cortical plasticity, which offers an information processing mechanism for gating what and how much is encoded to produce exceptionally persistent and vivid memories. Significance statement: Here we provide evidence of an epigenetic mechanism for information processing. The study reveals that a class I HDAC inhibitor (Malvaez et al., 2013; Rumbaugh et al., 2015; RGFP966, chemical formula C21H19FN4O) alters the formation of auditory memory by enabling more acoustic information to become encoded into memory. Moreover, RGFP966 appears to affect cortical plasticity: the primary auditory cortex reorganized in a manner that was unusually "tuned-in" to the specific sound cues and acoustic features that were related to reward and subsequently remembered. We propose that HDACs control "informational capture" at a systems level for what and how much information is encoded by gating sensory cortical plasticity that underlies the sensory richness of newly formed memories. Copyright © 2015 the authors 0270-6474/15/3513125-09$15.00/0.

  8. Histone Deacetylase Inhibition via RGFP966 Releases the Brakes on Sensory Cortical Plasticity and the Specificity of Memory Formation

    PubMed Central

    Bechay, Kiro; Rusche, James R.; Jacques, Vincent; Kudugunti, Shashi; Miao, Wenyan; Weinberger, Norman M.; McGaugh, James L.

    2015-01-01

    Research over the past decade indicates a novel role for epigenetic mechanisms in memory formation. Of particular interest is chromatin modification by histone deacetylases (HDACs), which, in general, negatively regulate transcription. HDAC deletion or inhibition facilitates transcription during memory consolidation and enhances long-lasting forms of synaptic plasticity and long-term memory. A key open question remains: How does blocking HDAC activity lead to memory enhancements? To address this question, we tested whether a normal function of HDACs is to gate information processing during memory formation. We used a class I HDAC inhibitor, RGFP966 (C21H19FN4O), to test the role of HDAC inhibition for information processing in an auditory memory model of learning-induced cortical plasticity. HDAC inhibition may act beyond memory enhancement per se to instead regulate information in ways that lead to encoding more vivid sensory details into memory. Indeed, we found that RGFP966 controls memory induction for acoustic details of sound-to-reward learning. Rats treated with RGFP966 while learning to associate sound with reward had stronger memory and additional information encoded into memory for highly specific features of sounds associated with reward. Moreover, behavioral effects occurred with unusually specific plasticity in primary auditory cortex (A1). Class I HDAC inhibition appears to engage A1 plasticity that enables additional acoustic features to become encoded in memory. Thus, epigenetic mechanisms act to regulate sensory cortical plasticity, which offers an information processing mechanism for gating what and how much is encoded to produce exceptionally persistent and vivid memories. SIGNIFICANCE STATEMENT Here we provide evidence of an epigenetic mechanism for information processing. The study reveals that a class I HDAC inhibitor (Malvaez et al., 2013; Rumbaugh et al., 2015; RGFP966, chemical formula C21H19FN4O) alters the formation of auditory memory by enabling more acoustic information to become encoded into memory. Moreover, RGFP966 appears to affect cortical plasticity: the primary auditory cortex reorganized in a manner that was unusually “tuned-in” to the specific sound cues and acoustic features that were related to reward and subsequently remembered. We propose that HDACs control “informational capture” at a systems level for what and how much information is encoded by gating sensory cortical plasticity that underlies the sensory richness of newly formed memories. PMID:26400942

  9. Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.

    PubMed

    Hazell, J W; Jastreboff, P J

    1990-02-01

    A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus it must result from a generator in the auditory system which undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and interaction with the prefrontal cortex and limbic system is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.

  10. Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success.

    PubMed

    Campbell, Ruth; MacSweeney, Mairéad; Woll, Bencie

    2014-01-01

    Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. Given the paucity of such relevant findings, we suggest that the best guarantee of good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration.

  11. Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success

    PubMed Central

    Campbell, Ruth; MacSweeney, Mairéad; Woll, Bencie

    2014-01-01

    Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. Given the paucity of such relevant findings, we suggest that the best guarantee of good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration. PMID:25368567

  12. Distributed Processing and Cortical Specialization for Speech and Environmental Sounds in Human Temporal Cortex

    ERIC Educational Resources Information Center

    Leech, Robert; Saygin, Ayse Pinar

    2011-01-01

    Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…

  13. Cortical substrates and functional correlates of auditory deviance processing deficits in schizophrenia

    PubMed Central

    Rissling, Anthony J.; Miyakoshi, Makoto; Sugar, Catherine A.; Braff, David L.; Makeig, Scott; Light, Gregory A.

    2014-01-01

    Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant−Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics. PMID:25379456
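
    The core ERP measure described above, the (Deviant - Standard) difference wave and its MMN peak, can be illustrated with a brief sketch. The code below uses synthetic epoch arrays, an assumed 500 Hz sampling rate, and an assumed 100-250 ms search window; it is not the authors' ICA-based source pipeline, only a minimal illustration of how a difference wave and an MMN peak measure are typically computed.

        import numpy as np

        fs = 500                                   # sampling rate in Hz (assumed)
        t = np.arange(-0.1, 0.5, 1.0 / fs)         # epoch time axis: -100 ms to 500 ms

        rng = np.random.default_rng(0)
        standard_epochs = rng.normal(0.0, 2.0, (200, t.size))   # trials x samples (synthetic)
        deviant_epochs = rng.normal(0.0, 2.0, (50, t.size))
        deviant_epochs -= 3.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))  # fake MMN near 150 ms

        # Average over trials, then subtract: the deviant-minus-standard difference wave.
        difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

        # MMN peak: the most negative value within an assumed 100-250 ms search window.
        window = (t >= 0.10) & (t <= 0.25)
        mmn_amplitude = difference_wave[window].min()
        mmn_latency_ms = 1000 * t[window][difference_wave[window].argmin()]
        print(f"MMN peak: {mmn_amplitude:.2f} (arbitrary units) at {mmn_latency_ms:.0f} ms")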

  14. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders remained under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and consultations with speech therapists and a psychologist. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves - together with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). The children then took part in regular auditory training and attended speech therapy. Speech was assessed after treatment and therapy; the psychoacoustic tests were repeated and P300 cortical potentials were recorded again. Statistical analyses were then performed. The analyses revealed that auditory training is very effective in patients with dyslalia and other central auditory disorders. Auditory training may be an efficient therapy supporting speech therapy in children with dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  15. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  16. Knockout Mice for Dyslexia Susceptibility Gene Homologs KIAA0319 and KIAA0319L have Unaffected Neuronal Migration but Display Abnormal Auditory Processing

    PubMed Central

    Guidi, Luiz G; Mattley, Jane; Martinez-Garay, Isabel; Monaco, Anthony P; Linden, Jennifer F; Velayos-Baeza, Antonio

    2017-01-01

    Developmental dyslexia is a neurodevelopmental disorder that affects reading ability and is caused by genetic and non-genetic factors. Amongst the susceptibility genes identified to date, KIAA0319 is a prime candidate. RNA-interference experiments in rats suggested its involvement in cortical migration but we could not confirm these findings in Kiaa0319-mutant mice. Given that its homologous gene Kiaa0319L (AU040320) has also been proposed to play a role in neuronal migration, we interrogated whether absence of AU040320 alone or together with KIAA0319 affects migration in the developing brain. Analyses of AU040320 and double Kiaa0319;AU040320 knockouts (dKO) revealed no evidence for impaired cortical lamination, neuronal migration, neurogenesis or other anatomical abnormalities. However, dKO mice displayed an auditory deficit in a behavioral gap-in-noise detection task. In addition, recordings of click-evoked auditory brainstem responses revealed suprathreshold deficits in wave III amplitude in AU040320-KO mice, and more general deficits in dKOs. These findings suggest that absence of AU040320 disrupts firing and/or synchrony of activity in the auditory brainstem, while loss of both proteins might affect both peripheral and central auditory function. Overall, these results stand against the proposed role of KIAA0319 and AU040320 in neuronal migration and outline their relationship with deficits in the auditory system. PMID:29045729

  17. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.

  18. Shaping the aging brain: role of auditory input patterns in the emergence of auditory cortical impairments

    PubMed Central

    Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne

    2013-01-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649

  19. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    PubMed

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  20. Auditory Responses and Stimulus-Specific Adaptation in Rat Auditory Cortex are Preserved Across NREM and REM Sleep

    PubMed Central

    Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio

    2015-01-01

    Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, non-rapid eye movement (NREM) sleep, and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
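
    As an illustration of the stimulus-specific adaptation measure mentioned above, the sketch below computes a commonly used SSA index from mean spike counts in a two-tone oddball design. The numbers are invented placeholders, and the index shown is the standard (deviant - standard)/(deviant + standard) contrast, which may differ in detail from the exact metric used in the study.

        # Minimal sketch of a stimulus-specific adaptation (SSA) index for an
        # oddball tone paradigm; inputs are assumed mean evoked spike counts.
        resp_f1_standard, resp_f1_deviant = 4.0, 5.2   # tone f1 when common vs. rare
        resp_f2_standard, resp_f2_deviant = 3.5, 4.6   # tone f2 when common vs. rare

        # Pool over both frequencies; values near 0 mean no adaptation,
        # values approaching 1 mean strong preference for deviants.
        deviant_sum = resp_f1_deviant + resp_f2_deviant
        standard_sum = resp_f1_standard + resp_f2_standard
        ssa_index = (deviant_sum - standard_sum) / (deviant_sum + standard_sum)
        print(f"SSA index: {ssa_index:.3f}")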

  1. Music training relates to the development of neural mechanisms of selective auditory attention.

    PubMed

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Theta band oscillations reflect more than entrainment: behavioral and neural evidence demonstrates an active chunking process.

    PubMed

    Teng, Xiangbin; Tian, Xing; Doelling, Keith; Poeppel, David

    2017-10-17

    Parsing continuous acoustic streams into perceptual units is fundamental to auditory perception. Previous studies have uncovered a cortical entrainment mechanism in the delta and theta bands (~1-8 Hz) that correlates with formation of perceptual units in speech, music, and other quasi-rhythmic stimuli. Whether cortical oscillations in the delta-theta bands are passively entrained by regular acoustic patterns or play an active role in parsing the acoustic stream is debated. Here, we investigate cortical oscillations using novel stimuli with 1/f modulation spectra. These 1/f signals have no rhythmic structure but contain information over many timescales because of their broadband modulation characteristics. We chose 1/f modulation spectra with varying exponents of f, which simulate the dynamics of environmental noise, speech, vocalizations, and music. While undergoing magnetoencephalography (MEG) recording, participants listened to 1/f stimuli and detected embedded target tones. Tone detection performance varied across stimuli with different exponents and could be explained by a local signal-to-noise ratio computed using a temporal window of around 200 ms. Furthermore, theta band oscillations, surprisingly, were observed for all stimuli, but robust phase coherence was preferentially displayed by stimuli with exponents 1 and 1.5. We constructed an auditory processing model to quantify acoustic information on various timescales and correlated the model outputs with the neural results. We show that cortical oscillations reflect the chunking of the acoustic stream into segments longer than about 200 ms. These results suggest an active auditory segmentation mechanism, complementary to entrainment, operating on a timescale of ~200 ms to organize acoustic information. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
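
    A rough sketch of how a stimulus with a 1/f modulation spectrum might be constructed is given below: an amplitude envelope whose modulation power falls off as 1/f^a is synthesized in the frequency domain and imposed on a broadband noise carrier. The sampling rate, modulation band, and exponent are assumed placeholder values, not the parameters used in the study.

        import numpy as np

        fs = 16000                               # audio sampling rate (assumed)
        dur = 2.0                                # stimulus duration in seconds
        a = 1.0                                  # modulation-spectrum exponent (assumed)
        n = int(fs * dur)

        rng = np.random.default_rng(1)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        mag = np.zeros_like(freqs)
        band = (freqs > 0.5) & (freqs < 64)      # restrict modulations to ~0.5-64 Hz
        mag[band] = freqs[band] ** (-a / 2.0)    # power ~ 1/f**a  =>  amplitude ~ f**(-a/2)
        phase = rng.uniform(0, 2 * np.pi, freqs.size)
        envelope = np.fft.irfft(mag * np.exp(1j * phase), n)
        envelope = (envelope - envelope.min()) / (envelope.max() - envelope.min())  # scale to [0, 1]

        carrier = rng.normal(0.0, 1.0, n)        # broadband noise carrier
        stimulus = envelope * carrier
        stimulus /= np.abs(stimulus).max()       # normalize for playback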

  3. Spectral context affects temporal processing in awake auditory cortex

    PubMed Central

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.

    2013-01-01

    Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16 channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM was set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
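
    The two carrier types described above can be illustrated with a short sketch that builds sinusoidal amplitude-modulated (SAM) stimuli on a tonal carrier and on a 2-octave noise band. The carrier frequency, modulation rate, and filter settings below are placeholder assumptions rather than the study's actual parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 48000                                # sampling rate (assumed)
        t = np.arange(0, 0.5, 1.0 / fs)           # 500 ms stimulus
        fm = 16.0                                 # modulation frequency in Hz (assumed)
        fc = 4000.0                               # carrier / center frequency in Hz (assumed)

        modulator = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t))   # 100% depth SAM envelope

        # Tonal carrier: a pure tone at fc.
        sam_tone = modulator * np.sin(2 * np.pi * fc * t)

        # Noise carrier: broadband noise band-limited to a 2-octave band around fc.
        rng = np.random.default_rng(2)
        noise = rng.normal(0.0, 1.0, t.size)
        sos = butter(4, [fc / 2, fc * 2], btype="bandpass", fs=fs, output="sos")
        noise_carrier = sosfiltfilt(sos, noise)
        sam_noise = modulator * noise_carrier / np.abs(noise_carrier).max()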

  4. Rhythmic and melodic deviations in musical sequences recruit different cortical areas for mismatch detection.

    PubMed

    Lappe, Claudia; Steinsträter, Olaf; Pantev, Christo

    2013-01-01

    The mismatch negativity (MMN), an event-related potential (ERP) representing the violation of an acoustic regularity, is considered as a pre-attentive change detection mechanism at the sensory level on the one hand and as a prediction error signal on the other hand, suggesting that bottom-up as well as top-down processes are involved in its generation. Rhythmic and melodic deviations within a musical sequence elicit a MMN in musically trained subjects, indicating that acquired musical expertise leads to better discrimination accuracy of musical material and better predictions about upcoming musical events. Expectation violations to musical material could therefore recruit neural generators that reflect top-down processes that are based on musical knowledge. We describe the neural generators of the musical MMN for rhythmic and melodic material after a short-term sensorimotor-auditory (SA) training. We compare the localization of musical MMN data from two previous MEG studies by applying beamformer analysis. One study focused on the melodic harmonic progression whereas the other study focused on rhythmic progression. The MMN to melodic deviations revealed significant right hemispheric neural activation in the superior temporal gyrus (STG), inferior frontal cortex (IFC), and the superior frontal (SFG) and orbitofrontal (OFG) gyri. IFC and SFG activation was also observed in the left hemisphere. In contrast, beamformer analysis of the data from the rhythm study revealed bilateral activation within the vicinity of auditory cortices and in the inferior parietal lobule (IPL), an area that has recently been implied in temporal processing. We conclude that different cortical networks are activated in the analysis of the temporal and the melodic content of musical material, and discuss these networks in the context of the dual-pathway model of auditory processing.

  5. Neuromimetic Sound Representation for Percept Detection and Manipulation

    NASA Astrophysics Data System (ADS)

    Zotkin, Dmitry N.; Chi, Taishih; Shamma, Shihab A.; Duraiswami, Ramani

    2005-12-01

    The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument intermediate between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  6. Speech acquisition predicts regions of enhanced cortical response to auditory stimulation in autism spectrum individuals.

    PubMed

    Samson, F; Zeffiro, T A; Doyon, J; Benali, H; Mottron, L

    2015-09-01

    A continuum of phenotypes makes up the autism spectrum (AS). In particular, individuals show large differences in language acquisition, ranging from precocious speech to severe speech onset delay. However, the neurological origin of this heterogeneity remains unknown. Here, we sought to determine whether AS individuals differing in speech acquisition show different cortical responses to auditory stimulation and morphometric brain differences. Whole-brain activity following exposure to non-social sounds was investigated. Individuals in the AS were classified according to the presence or absence of Speech Onset Delay (AS-SOD and AS-NoSOD, respectively) and were compared with IQ-matched typically developing individuals (TYP). AS-NoSOD participants displayed greater task-related activity than TYP in the inferior frontal gyrus and peri-auditory middle and superior temporal gyri, which are associated with language processing. Conversely, the AS-SOD group only showed enhanced activity in the vicinity of the auditory cortex. We detected no differences in brain structure between groups. This is the first study to demonstrate the existence of differences in functional brain activity between AS individuals divided according to their pattern of speech development. These findings support the Trigger-threshold-target model and indicate that the occurrence of speech onset delay in AS individuals depends on the location of cortical functional reallocation, which favors perception in AS-SOD and language in AS-NoSOD. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. A quantitative comparison of the hemispheric, areal, and laminar origins of sensory and motor cortical projections to the superior colliculus of the cat.

    PubMed

    Butler, Blake E; Chabot, Nicole; Lomber, Stephen G

    2016-09-01

    The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.

  8. The effects of neck flexion on cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in related sensory cortices

    PubMed Central

    2012-01-01

    Background A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306

  9. Auditory cortex of bats and primates: managing species-specific calls for social communication

    PubMed Central

    Kanwal, Jagmeet S.; Rauschecker, Josef P.

    2014-01-01

    Individuals of many animal species communicate with each other using sounds or “calls” that are made up of basic acoustic patterns and their combinations. We are interested in questions about the processing of communication calls and their representation within the mammalian auditory cortex. Our studies compare in particular two species for which a large body of data has accumulated: the mustached bat and the rhesus monkey. We conclude that the brains of both species share a number of functional and organizational principles, which differ only in the extent to which and how they are implemented. For instance, neurons in both species use “combination-sensitivity” (nonlinear spectral and temporal integration of stimulus components) as a basic mechanism to enable exquisite sensitivity to and selectivity for particular call types. Whereas combination-sensitivity is already found abundantly at the primary auditory cortical and also at subcortical levels in bats, it becomes prevalent only at the level of the lateral belt in the secondary auditory cortex of monkeys. A parallel-hierarchical framework for processing complex sounds up to the level of the auditory cortex in bats and an organization into parallel-hierarchical, cortico-cortical auditory processing streams in monkeys is another common principle. Response specialization of neurons seems to be more pronounced in bats than in monkeys, whereas a functional specialization into “what” and “where” streams in the cerebral cortex is more pronounced in monkeys than in bats. These differences, in part, are due to the increased number and larger size of auditory areas in the parietal and frontal cortex in primates. Accordingly, the computational prowess of neural networks and the functional hierarchy resulting in specializations is established early and accelerated across brain regions in bats. The principles proposed here for the neural “management” of species-specific calls in bats and primates can be tested by studying the details of call processing in additional species. Also, computational modeling in conjunction with coordinated studies in bats and monkeys can help to clarify the fundamental question of perceptual invariance (or “constancy”) in call recognition, which has obvious relevance for understanding speech perception and its disorders in humans. PMID:17485400

  10. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    PubMed

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modalities to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Speech-evoked activation in adult temporal cortex measured using functional near-infrared spectroscopy (fNIRS): Are the measurements reliable?

    PubMed

    Wiggins, Ian M; Anderson, Carly A; Kitterick, Pádraig T; Hartley, Douglas E H

    2016-09-01

    Functional near-infrared spectroscopy (fNIRS) is a silent, non-invasive neuroimaging technique that is potentially well suited to auditory research. However, the reliability of auditory-evoked activation measured using fNIRS is largely unknown. The present study investigated the test-retest reliability of speech-evoked fNIRS responses in normally-hearing adults. Seventeen participants underwent fNIRS imaging in two sessions separated by three months. In a block design, participants were presented with auditory speech, visual speech (silent speechreading), and audiovisual speech conditions. Optode arrays were placed bilaterally over the temporal lobes, targeting auditory brain regions. A range of established metrics was used to quantify the reproducibility of cortical activation patterns, as well as the amplitude and time course of the haemodynamic response within predefined regions of interest. The use of a signal processing algorithm designed to reduce the influence of systemic physiological signals was found to be crucial to achieving reliable detection of significant activation at the group level. For auditory speech (with or without visual cues), reliability was good to excellent at the group level, but highly variable among individuals. Temporal-lobe activation in response to visual speech was less reliable, especially in the right hemisphere. Consistent with previous reports, fNIRS reliability was improved by averaging across a small number of channels overlying a cortical region of interest. Overall, the present results confirm that fNIRS can measure speech-evoked auditory responses in adults that are highly reliable at the group level, and indicate that signal processing to reduce physiological noise may substantially improve the reliability of fNIRS measurements. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Delays in auditory processing identified in preschool children with FASD

    PubMed Central

    Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.

    2012-01-01

    Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372

  13. Delays in auditory processing identified in preschool children with FASD.

    PubMed

    Stephen, Julia M; Kodituwakku, Piyadasa W; Kodituwakku, Elizabeth L; Romero, Lucinda; Peters, Amanda M; Sharadamma, Nirupama M; Caprihan, Arvind; Coffman, Brian A

    2012-10-01

    Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool-aged children. As sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control (HC) children aged 3 to 6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1,000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multidipole spatio-temporal modeling technique to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Auditory delay revealed by MEG in children with FASDs may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. Copyright © 2012 by the Research Society on Alcoholism.
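
    The latency comparison reported above, a group contrast with age included as a covariate, can be sketched as a simple ANCOVA-style linear model. The sketch below uses synthetic latencies and numpy least squares; it is only a schematic of the analysis logic, not the statistical procedure actually applied in the study.

        import numpy as np

        rng = np.random.default_rng(6)
        age = rng.uniform(3, 6, 25)                       # age in years (synthetic)
        group = np.array([1] * 10 + [0] * 15)             # 1 = FASD, 0 = control (synthetic)
        latency = 120 + 15 * group - 5 * age + rng.normal(0, 5, 25)   # M100 latency in ms (synthetic)

        # Design matrix: intercept, group indicator, age covariate.
        X = np.column_stack([np.ones_like(age), group, age])
        beta, *_ = np.linalg.lstsq(X, latency, rcond=None)
        print(f"estimated group effect (FASD - control): {beta[1]:.1f} ms, "
              f"age slope: {beta[2]:.1f} ms/year")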

  14. Hemispheric asymmetry in auditory processing of speech envelope modulations in prereading children.

    PubMed

    Vanvooren, Sophie; Poelmans, Hanne; Hofmann, Michael; Ghesquière, Pol; Wouters, Jan

    2014-01-22

    The temporal envelope of speech is an important cue contributing to speech intelligibility. Theories about the neural foundations of speech perception postulate that the left and right auditory cortices are functionally specialized in analyzing speech envelope information at different time scales: the right hemisphere is thought to be specialized in processing syllable rate modulations, whereas a bilateral or left hemispheric specialization is assumed for phoneme rate modulations. Recently, it has been found that this functional hemispheric asymmetry is different in individuals with language-related disorders such as dyslexia. Most studies were, however, performed in adults and school-aged children, and only a little is known about how neural auditory processing at these specific rates manifests and develops in very young children before reading acquisition. Yet, studying hemispheric specialization for processing syllable and phoneme rate modulations in preliterate children may reveal early neural markers for dyslexia. In the present study, human cortical evoked potentials to syllable and phoneme rate modulations were measured in 5-year-old children at high and low hereditary risk for dyslexia. The results demonstrate a right hemispheric preference for processing syllable rate modulations and a symmetric pattern for phoneme rate modulations, regardless of hereditary risk for dyslexia. These results suggest that, while hemispheric specialization for processing syllable rate modulations seems to be mature in prereading children, hemispheric specialization for phoneme rate modulation processing may still be developing. These findings could have important implications for the development of phonological and reading skills.
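
    The distinction above between syllable-rate and phoneme-rate envelope modulations can be made concrete with a small sketch that extracts a temporal envelope and sums its modulation power in roughly syllable-rate (2-8 Hz) and phoneme-rate (12-40 Hz) bands. The signal here is a random stand-in and the band edges are illustrative assumptions; the study itself measured cortical evoked potentials to modulated stimuli rather than analyzing speech recordings.

        import numpy as np
        from scipy.signal import hilbert, butter, sosfiltfilt

        fs = 16000                                          # sampling rate (assumed)
        rng = np.random.default_rng(7)
        signal = rng.normal(0.0, 1.0, fs * 2)               # stand-in for a speech recording

        envelope = np.abs(hilbert(signal))                  # broadband temporal envelope
        sos = butter(4, 64, btype="lowpass", fs=fs, output="sos")
        envelope = sosfiltfilt(sos, envelope)               # keep only slow modulations

        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean())) ** 2
        freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
        syllable_rate = spectrum[(freqs >= 2) & (freqs <= 8)].sum()     # ~syllable-rate band
        phoneme_rate = spectrum[(freqs >= 12) & (freqs <= 40)].sum()    # ~phoneme-rate band
        print(f"syllable-rate energy {syllable_rate:.1f}, phoneme-rate energy {phoneme_rate:.1f}")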

  15. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds. PMID:22582038
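
    A sketch of an entropy-over-time measure in the spirit of the spectral structure variation (SSV) described above follows. It computes per-frame spectral entropy from a spectrogram of a stand-in signal; the authors' exact definitions of SSV, mean entropy, and harmonic content may differ.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 22050                                      # sampling rate (assumed)
        rng = np.random.default_rng(3)
        sound = rng.normal(0.0, 1.0, fs * 2)            # stand-in for an action sound

        f, t, sxx = spectrogram(sound, fs=fs, nperseg=1024, noverlap=512)
        p = sxx / sxx.sum(axis=0, keepdims=True)        # each time frame as a spectral distribution
        frame_entropy = -(p * np.log2(p + 1e-12)).sum(axis=0)   # spectral entropy per frame

        mean_entropy = frame_entropy.mean()             # analogue of "mean entropy"
        ssv = frame_entropy.std()                       # change in entropy over time
        print(f"mean entropy {mean_entropy:.2f} bits, entropy variation {ssv:.2f} bits")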

  16. Inhibition of histone deacetylase 3 via RGFP966 facilitates cortical plasticity underlying unusually accurate auditory associative cue memory for excitatory and inhibitory cue-reward associations.

    PubMed

    Shang, Andrea; Bylipudi, Sooraz; Bieszczad, Kasia M

    2018-05-31

    Epigenetic mechanisms are key for regulating long-term memory (LTM) and are known to exert control on memory formation in multiple systems of the adult brain, including the sensory cortex. One epigenetic mechanism is chromatin modification by histone acetylation. Blocking the action of histone de-acetylases (HDACs) that normally negatively regulate LTM by repressing transcription has been shown to enable memory formation. Indeed, HDAC inhibition appears to facilitate memory by altering the dynamics of gene expression events important for memory consolidation. However, less understood are the ways in which molecular-level consolidation processes alter subsequent memory to enhance storage or facilitate retrieval. Here we used a sensory perspective to investigate whether the characteristics of memory formed with HDAC inhibitors are different from naturally-formed memory. One possibility is that HDAC inhibition enables memory to form with greater sensory detail than normal. Because the auditory system undergoes learning-induced remodeling that provides substrates for sound-specific LTM, we aimed to identify behavioral effects of HDAC inhibition on memory for specific sound features using a standard model of auditory associative cue-reward learning, memory, and cortical plasticity. We found that three systemic post-training treatments of an HDAC3 inhibitor (RGFP966, Abcam Inc.) in rats in the early phase of training facilitated auditory discriminative learning, changed auditory cortical tuning, and increased the specificity for acoustic frequency formed in memory of both excitatory (S+) and inhibitory (S-) associations for at least 2 weeks. The findings support the idea that epigenetic mechanisms act on neural and behavioral sensory acuity to increase the precision of associative cue memory, which can be revealed by studying the sensory characteristics of long-term associative memory formation with HDAC inhibitors. Published by Elsevier B.V.

  17. 22q11.2 Deletion Syndrome Is Associated With Impaired Auditory Steady-State Gamma Response

    PubMed Central

    Pellegrino, Giovanni; Birknow, Michelle Rosgaard; Kjær, Trine Nørgaard; Baaré, William Frans Christiaan; Didriksen, Michael; Olsen, Line; Werge, Thomas; Mørup, Morten; Siebner, Hartwig Roman

    2018-01-01

    Background The 22q11.2 deletion syndrome confers a markedly increased risk for schizophrenia. 22q11.2 deletion carriers without manifest psychotic disorder offer the possibility to identify functional abnormalities that precede clinical onset. Since schizophrenia is associated with a reduced cortical gamma response to auditory stimulation at 40 Hz, we hypothesized that the 40 Hz auditory steady-state response (ASSR) may be attenuated in nonpsychotic individuals with a 22q11.2 deletion. Methods Eighteen young nonpsychotic 22q11.2 deletion carriers and a control group of 27 noncarriers with comparable age range (12–25 years) and sex ratio underwent 128-channel EEG. We recorded the cortical ASSR to a 40 Hz train of clicks, given either at a regular inter-stimulus interval of 25 ms or at irregular intervals jittered between 11 and 37 ms. Results Healthy noncarriers expressed a stable ASSR to the regular but not the irregular 40 Hz click stimulation. Both gamma power and inter-trial phase coherence of the ASSR were markedly reduced in the 22q11.2 deletion group. The ability to phase lock cortical gamma activity to regular auditory 40 Hz stimulation correlated with the individual expression of negative symptoms in deletion carriers (ρ = −0.487, P = .041). Conclusions Nonpsychotic 22q11.2 deletion carriers lack efficient phase locking of evoked gamma activity to regular 40 Hz auditory stimulation. This abnormality indicates a dysfunction of fast intracortical oscillatory processing in the gamma-band. Since the ASSR was attenuated in nonpsychotic deletion carriers, ASSR deficiency may constitute a premorbid risk marker of schizophrenia. PMID:28521049
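
    The two ASSR measures named above, 40 Hz power and inter-trial phase coherence (ITPC), can be illustrated with the following sketch on synthetic epoched EEG. A single FFT per one-second trial is assumed for simplicity; the study's EEG preprocessing and time-frequency analysis were more involved.

        import numpy as np

        fs = 500                                              # sampling rate in Hz (assumed)
        n_trials, n_samples = 100, fs                         # one hundred 1 s epochs (synthetic)
        rng = np.random.default_rng(4)
        t = np.arange(n_samples) / fs
        epochs = rng.normal(0.0, 1.0, (n_trials, n_samples))
        epochs += 0.5 * np.sin(2 * np.pi * 40 * t)            # embedded phase-locked 40 Hz response

        spectra = np.fft.rfft(epochs, axis=1)
        freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
        k = np.argmin(np.abs(freqs - 40.0))                   # frequency bin nearest 40 Hz

        gamma_power = np.mean(np.abs(spectra[:, k]) ** 2)     # mean 40 Hz power across trials
        itpc = np.abs(np.mean(spectra[:, k] / np.abs(spectra[:, k])))  # phase coherence, 0..1
        print(f"40 Hz power {gamma_power:.1f}, ITPC {itpc:.2f}")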

  18. Modulation of auditory processing during speech movement planning is limited in adults who stutter

    PubMed Central

    Daliri, Ayoub; Max, Ludo

    2015-01-01

    Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults’ auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060

  19. Towards neural correlates of auditory stimulus processing: A simultaneous auditory evoked potentials and functional magnetic resonance study using an odd-ball paradigm

    PubMed Central

    Milner, Rafał; Rusiniak, Mateusz; Lewandowska, Monika; Wolak, Tomasz; Ganc, Małgorzata; Piątkowska-Janko, Ewa; Bogorodzki, Piotr; Skarżyński, Henryk

    2014-01-01

    Background The neural underpinnings of auditory information processing have often been investigated using the odd-ball paradigm, in which infrequent sounds (deviants) are presented within a regular train of frequent stimuli (standards). Traditionally, this paradigm has been applied using either high temporal resolution (EEG) or high spatial resolution (fMRI, PET). However, used separately, these techniques cannot provide information on both the location and time course of particular neural processes. The goal of this study was to investigate the neural correlates of auditory processes with a fine spatio-temporal resolution. A simultaneous auditory evoked potentials (AEP) and functional magnetic resonance imaging (fMRI) technique (AEP-fMRI), together with an odd-ball paradigm, were used. Material/Methods Six healthy volunteers, aged 20–35 years, participated in an odd-ball simultaneous AEP-fMRI experiment. AEP in response to acoustic stimuli were used to model bioelectric intracerebral generators, and electrophysiological results were integrated with fMRI data. Results fMRI activation evoked by standard stimuli was found to occur mainly in the primary auditory cortex. Activity in these regions overlapped with intracerebral bioelectric sources (dipoles) of the N1 component. Dipoles of the N1/P2 complex in response to standard stimuli were also found in the auditory pathway between the thalamus and the auditory cortex. Deviant stimuli induced fMRI activity in the anterior cingulate gyrus, insula, and parietal lobes. Conclusions The present study showed that neural processes evoked by standard stimuli occur predominantly in subcortical and cortical structures of the auditory pathway. Deviants activate areas non-specific for auditory information processing. PMID:24413019

  20. Origins of thalamic and cortical projections to the posterior auditory field in congenitally deaf cats.

    PubMed

    Butler, Blake E; Chabot, Nicole; Kral, Andrej; Lomber, Stephen G

    2017-01-01

    Crossmodal plasticity takes place following sensory loss, such that areas that normally process the missing modality are reorganized to provide compensatory function in the remaining sensory systems. For example, congenitally deaf cats outperform normal hearing animals on localization of visual stimuli presented in the periphery, and this advantage has been shown to be mediated by the posterior auditory field (PAF). In order to determine the nature of the anatomical differences that underlie this phenomenon, we injected a retrograde tracer into PAF of congenitally deaf animals and quantified the thalamic and cortical projections to this field. The pattern of projections from areas throughout the brain was determined to be qualitatively similar to that previously demonstrated in normal hearing animals, but with twice as many projections arising from non-auditory cortical areas. In addition, small ectopic projections were observed from a number of fields in visual cortex, including areas 19, 20a, 20b, and 21b, and area 7 of parietal cortex. These areas did not show projections to PAF in cats deafened ototoxically near the onset of hearing, and provide a possible mechanism for crossmodal reorganization of PAF. These, along with the possible contributions of other mechanisms, are considered. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Recruitment of the auditory cortex in congenitally deaf cats by long-term cochlear electrostimulation.

    PubMed

    Klinke, R; Kral, A; Heid, S; Tillein, J; Hartmann, R

    1999-09-10

    In congenitally deaf cats, the central auditory system is deprived of acoustic input because of degeneration of the organ of Corti before the onset of hearing. Primary auditory afferents survive and can be stimulated electrically. By means of an intracochlear implant and an accompanying sound processor, congenitally deaf kittens were exposed to sounds and conditioned to respond to tones. After months of exposure to meaningful stimuli, the cortical activity in chronically implanted cats produced field potentials of higher amplitudes, expanded in area, developed long latency responses indicative of intracortical information processing, and showed more synaptic efficacy than in naïve, unstimulated deaf cats. The activity established by auditory experience resembles activity in hearing animals.

  2. Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds

    PubMed Central

    Agnew, Z.K.; McGettigan, C.; Scott, S.K.

    2012-01-01

    Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However, no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties, while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557

  3. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS)

    PubMed Central

    Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing adults with bilateral subjective tinnitus and in controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
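
    The channel-pairwise functional connectivity measure described above is not fully specified in the abstract; the following is a minimal sketch, assuming a Pearson-correlation estimator computed over a 60-second baseline window, with a hypothetical channel count and simulated hemodynamic signals.

      import numpy as np

      def connectivity_matrix(signals: np.ndarray) -> np.ndarray:
          """signals: (n_channels, n_samples) hemodynamic time series from a
          baseline window. Returns an (n_channels, n_channels) matrix of
          Pearson correlations between all channel pairs."""
          return np.corrcoef(signals)

      # Example: 16 hypothetical fNIRS channels sampled at 10 Hz for 60 s.
      rng = np.random.default_rng(0)
      baseline = rng.standard_normal((16, 600))
      pre_stim_fc = connectivity_matrix(baseline)
      print(pre_stim_fc.shape)  # (16, 16)

    Comparing such matrices computed before and after sound stimulation, and between groups, yields the kind of connectivity contrasts reported above.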

  4. Differential processing of melodic, rhythmic and simple tone deviations in musicians--an MEG study.

    PubMed

    Lappe, Claudia; Lappe, Markus; Pantev, Christo

    2016-01-01

    Rhythm and melody are two basic characteristics of music. Performing musicians have to pay attention to both, and avoid errors in either aspect of their performance. To investigate the neural processes involved in detecting melodic and rhythmic errors from auditory input we tested musicians on both kinds of deviations in a mismatch negativity (MMN) design. We found that MMN responses to a rhythmic deviation occurred at shorter latencies than MMN responses to a melodic deviation. Beamformer source analysis showed that the melodic deviation activated superior temporal, inferior frontal and superior frontal areas whereas the activation pattern of the rhythmic deviation focused more strongly on inferior and superior parietal areas, in addition to superior temporal cortex. Activation in the supplementary motor area occurred for both types of deviations. We also recorded responses to similar pitch and tempo deviations in a simple, non-musical repetitive tone pattern. In this case, there was no latency difference between the MMNs and cortical activation was smaller and mostly limited to auditory cortex. The results suggest that prediction and error detection of musical stimuli in trained musicians involve a broad cortical network and that rhythmic and melodic errors are processed in partially different cortical streams. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    PubMed

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. Our aim was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words with intact letter by letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA) where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing, while the other part of this region processed both lexical and nonlexical symbols.

  6. Cortical auditory evoked potentials in the assessment of auditory neuropathy: two case studies.

    PubMed

    Pearce, Wendy; Golding, Maryanne; Dillon, Harvey

    2007-05-01

    Infants with auditory neuropathy and possible hearing impairment are being identified at very young ages through the implementation of hearing screening programs. The diagnosis is commonly based on evidence of normal cochlear function but abnormal brainstem function. This lack of normal brainstem function is highly problematic when prescribing amplification in young infants because prescriptive formulae require the input of hearing thresholds that are normally estimated from auditory brainstem responses to tonal stimuli. Without this information, there is great uncertainty surrounding the final fitting. Cortical auditory evoked potentials may, however, still be evident and reliably recorded to speech stimuli presented at conversational levels. The case studies of two infants are presented that demonstrate how these higher order electrophysiological responses may be utilized in the audiological management of some infants with auditory neuropathy.

  7. Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.

    PubMed

    Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R

    2003-05-01

    A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16 or 32 channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with characteristics similar to those of normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means of graded transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
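
    The "50% point" reported above can be estimated from an intensity generalization gradient by fitting a psychometric function; the sketch below assumes a logistic form fitted with least squares, with hypothetical current levels and detection probabilities, since the study's exact fitting procedure is not given in the abstract.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(x, x50, slope):
          """Detection probability as a function of current level x (microA)."""
          return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

      currents = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)    # microA
      p_detect = np.array([0.02, 0.10, 0.35, 0.60, 0.85, 0.95, 0.98])  # hypothetical detection rates

      (x50, slope), _ = curve_fit(logistic, currents, p_detect, p0=[45.0, 0.1])
      print(f"Estimated 50% point: {x50:.1f} microA")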

  8. Bilingualism increases neural response consistency and attentional control: Evidence for sensory and cognitive coupling

    PubMed Central

    Krizman, Jennifer; Skoe, Erika; Marian, Viorica; Kraus, Nina

    2014-01-01

    Auditory processing is presumed to be influenced by cognitive processes – including attentional control – in a top-down manner. In bilinguals, activation of both languages during daily communication hones inhibitory skills, which subsequently bolster attentional control. We hypothesize that the heightened attentional demands of bilingual communication strengthen connections between cognitive (i.e., attentional control) and auditory processing, leading to greater across-trial consistency in the auditory evoked response (i.e., neural consistency) in bilinguals. To assess this, we collected passively-elicited auditory evoked responses to the syllable [da] and separately obtained measures of attentional control and language ability in adolescent Spanish-English bilinguals and English monolinguals. Bilinguals demonstrated enhanced attentional control and more consistent brainstem and cortical responses. In bilinguals, but not monolinguals, brainstem consistency tracked with language proficiency and attentional control. We interpret these enhancements in neural consistency as the outcome of strengthened attentional control that emerged from experience communicating in two languages. PMID:24413593
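
    Across-trial response consistency can be quantified in several ways; the sketch below shows one common split-half approach (correlating the averages of random half-splits of trials), which is an assumption for illustration rather than the authors' exact metric, applied to simulated single-trial responses.

      import numpy as np

      def split_half_consistency(trials: np.ndarray, n_splits: int = 100, seed: int = 0) -> float:
          """trials: (n_trials, n_samples) single-trial evoked responses.
          Returns the mean correlation between averages of random half-splits."""
          rng = np.random.default_rng(seed)
          n = trials.shape[0]
          rs = []
          for _ in range(n_splits):
              order = rng.permutation(n)
              a, b = trials[order[: n // 2]], trials[order[n // 2:]]
              rs.append(np.corrcoef(a.mean(axis=0), b.mean(axis=0))[0, 1])
          return float(np.mean(rs))

      trials = np.random.default_rng(1).standard_normal((200, 512))  # simulated epochs
      print(split_half_consistency(trials))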

  9. Brainstem Transcription of Speech Is Disrupted in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina

    2009-01-01

    Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or…

  10. Neurophysiological mechanisms of cortical plasticity impairments in schizophrenia and modulation by the NMDA receptor agonist D-serine.

    PubMed

    Kantrowitz, Joshua T; Epstein, Michael L; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M; Revheim, Nadine; Lehrfeld, Nayla P; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C

    2016-12-01

    Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time-frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist D-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of D-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. D-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
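
    The event-related time-frequency measures mentioned above include θ-band responses; the sketch below illustrates one simple way to extract evoked theta-band power from single-trial EEG, assuming an approximate 4-7 Hz band, a 500 Hz sampling rate, and simulated epochs (none of which are taken from the study).

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 500  # Hz, assumed sampling rate
      b, a = butter(4, [4, 7], btype="bandpass", fs=fs)

      rng = np.random.default_rng(8)
      trials = rng.standard_normal((60, fs))       # 60 hypothetical 1-second epochs
      theta = filtfilt(b, a, trials, axis=1)       # theta-band filtered single trials
      evoked_theta_power = (theta.mean(axis=0) ** 2).mean()  # power of the phase-locked average
      print(evoked_theta_power)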

  11. Effects of an NMDA antagonist on the auditory mismatch negativity response to transcranial direct current stimulation.

    PubMed

    Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner

    2017-05-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results have indicated that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), mismatch negativity (MMN). As evidence has shown that the lasting effects of tDCS may depend on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e., the device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e., no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.

  12. Auditory Temporal Acuity Probed With Cochlear Implant Stimulation and Cortical Recording

    PubMed Central

    Kirby, Alana E.

    2010-01-01

    Cochlear implants stimulate the auditory nerve with amplitude-modulated (AM) electric pulse trains. Pulse rates >2,000 pulses per second (pps) have been hypothesized to enhance transmission of temporal information. Recent studies, however, have shown that higher pulse rates impair phase locking to sinusoidal AM in the auditory cortex and impair perceptual modulation detection. Here, we investigated the effects of high pulse rates on the temporal acuity of transmission of pulse trains to the auditory cortex. In anesthetized guinea pigs, signal-detection analysis was used to measure the thresholds for detection of gaps in pulse trains at rates of 254, 1,017, and 4,069 pps and in acoustic noise. Gap-detection thresholds decreased by an order of magnitude with increases in pulse rate from 254 to 4,069 pps. Such a pulse-rate dependence would likely influence speech reception through clinical speech processors. To elucidate the neural mechanisms of gap detection, we measured recovery from forward masking after a 196.6-ms pulse train. Recovery from masking was faster at higher carrier pulse rates and masking increased linearly with current level. We fit the data with a dual-exponential recovery function, consistent with a peripheral and a more central process. High-rate pulse trains evoked less central masking, possibly due to adaptation of the response in the auditory nerve. Neither gap detection nor forward masking varied with cortical depth, indicating that these processes are likely subcortical. These results indicate that gap detection and modulation detection are mediated by two separate neural mechanisms. PMID:19923242
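
    The dual-exponential recovery function mentioned above, with a fast (peripheral) and a slow (more central) component, can be fitted as sketched below; the delays, masking values, and starting parameters are hypothetical and serve only to illustrate the model form.

      import numpy as np
      from scipy.optimize import curve_fit

      def dual_exp_recovery(t, a_fast, tau_fast, a_slow, tau_slow):
          """Residual masking as a function of masker-probe delay t (ms)."""
          return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

      delays = np.array([2, 4, 8, 16, 32, 64, 128, 256], dtype=float)     # ms
      masking = np.array([0.9, 0.75, 0.55, 0.4, 0.28, 0.18, 0.1, 0.05])   # hypothetical values

      popt, _ = curve_fit(dual_exp_recovery, delays, masking,
                          p0=[0.6, 5.0, 0.4, 80.0], maxfev=10000)
      print(dict(zip(["a_fast", "tau_fast", "a_slow", "tau_slow"], popt)))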

  13. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction.

    PubMed

    Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker

    2016-01-01

    Whether cognitive load, and other aspects of task difficulty, increases or decreases distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually-presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus-of-attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, which decreases distractibility, as a side effect of the increased activity in a focused-attention network.

  14. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction

    PubMed Central

    Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker

    2016-01-01

    Whether cognitive load—and other aspects of task difficulty—increases or decreases distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually-presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus-of-attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information—which decreases distractibility—as a side effect of the increased activity in a focused-attention network. PMID:27242485

  15. Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.

    PubMed

    Berlot, Eva; Formisano, Elia; De Martino, Federico

    2018-05-23

    Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors 0270-6474/18/384934-09$15.00/0.
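
    The logic of testing whether an omission response carries a frequency-specific spatial pattern can be illustrated by correlating the omission-trial pattern with template patterns measured during actual high- and low-frequency tones; the sketch below uses simulated voxel patterns and is only an assumption about the analysis style, not the authors' pipeline (which may involve classifiers and cross-validation).

      import numpy as np

      rng = np.random.default_rng(6)
      n_voxels = 500
      template_high = rng.standard_normal(n_voxels)   # pattern evoked by a heard high tone (simulated)
      template_low = rng.standard_normal(n_voxels)    # pattern evoked by a heard low tone (simulated)

      # Omission trial from a sequence predicting a high-frequency target (simulated).
      omission_pattern = template_high + 0.5 * rng.standard_normal(n_voxels)

      r_high = np.corrcoef(omission_pattern, template_high)[0, 1]
      r_low = np.corrcoef(omission_pattern, template_low)[0, 1]
      print("decoded as high" if r_high > r_low else "decoded as low", r_high, r_low)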

  16. Auditory cortical volumes and musical ability in Williams syndrome.

    PubMed

    Martens, Marilee A; Reutens, David C; Wilson, Sarah J

    2010-07-01

    Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  17. Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”

    PubMed Central

    Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.

    2010-01-01

    Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526
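
    The hemispheric alpha-power difference described above can be summarized as a simple lateralization index from left- and right-parietal channels; the sketch below assumes Welch spectra, an 8-12 Hz band, and hypothetical channel labels, sampling rate, and data.

      import numpy as np
      from scipy.signal import welch

      fs = 256  # Hz, assumed sampling rate
      rng = np.random.default_rng(2)
      left_parietal = rng.standard_normal(fs * 10)    # a left parietal site (simulated)
      right_parietal = rng.standard_normal(fs * 10)   # a right parietal site (simulated)

      def alpha_power(x, fs):
          f, pxx = welch(x, fs=fs, nperseg=fs * 2)
          band = (f >= 8) & (f <= 12)
          return pxx[band].mean()

      l_pow, r_pow = alpha_power(left_parietal, fs), alpha_power(right_parietal, fs)
      lateralization_index = (r_pow - l_pow) / (r_pow + l_pow)
      print(lateralization_index)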

  18. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

    Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather than the caudal-medial ICC using the AMI may improve or elicit different types of hearing capabilities.

  19. A Novel Functional Magnetic Resonance Imaging Paradigm for the Preoperative Assessment of Auditory Perception in a Musician Undergoing Temporal Lobe Surgery.

    PubMed

    Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J

    2018-03-01

    Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory/aba/ and a visual /aga/ are merged to the percept of ‘ada’. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  1. On cortical coding of vocal communication sounds in primates

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqin

    2000-10-01

    Understanding how the brain processes vocal communication sounds is one of the most challenging problems in neuroscience. Our understanding of how the cortex accomplishes this unique task should greatly facilitate our understanding of cortical mechanisms in general. Perception of species-specific communication sounds is an important aspect of the auditory behavior of many animal species and is crucial for their social interactions, reproductive success, and survival. The principles of neural representations of these behaviorally important sounds in the cerebral cortex have direct implications for the neural mechanisms underlying human speech perception. Our progress in this area has been relatively slow, compared with our understanding of other auditory functions such as echolocation and sound localization. This article discusses previous and current studies in this field, with emphasis on nonhuman primates, and proposes a conceptual platform to further our exploration of this frontier. It is argued that the prerequisite condition for understanding cortical mechanisms underlying communication sound perception and production is an appropriate animal model. Three issues are central to this work: (i) neural encoding of statistical structure of communication sounds, (ii) the role of behavioral relevance in shaping cortical representations, and (iii) sensory-motor interactions between vocal production and perception systems.

  2. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    PubMed Central

    Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.

    2015-01-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from the PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when fluoro ruby was injected in ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918

  3. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
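
    The preprocessing stage described above (per-frequency high-pass filtering of the spectrogram with frequency-dependent time constants, followed by half-wave rectification, before a standard LN model) can be sketched as below; the first-order filter form, the time constants, and the spectrogram are assumptions for illustration rather than the authors' exact implementation.

      import numpy as np

      def ic_adaptation_stage(spectrogram: np.ndarray, taus_ms: np.ndarray, dt_ms: float = 5.0) -> np.ndarray:
          """spectrogram: (n_freqs, n_times); taus_ms: per-frequency time constants.
          Subtracts an exponentially weighted running mean (a first-order high-pass)
          from each frequency channel, then half-wave rectifies the result."""
          out = np.zeros_like(spectrogram)
          for i, tau in enumerate(taus_ms):
              alpha = dt_ms / (tau + dt_ms)       # smoothing factor of the running mean
              mean_est = spectrogram[i, 0]
              for t in range(spectrogram.shape[1]):
                  mean_est = (1 - alpha) * mean_est + alpha * spectrogram[i, t]
                  out[i, t] = spectrogram[i, t] - mean_est   # high-pass output
          return np.maximum(out, 0.0)                        # half-wave rectification

      spec = np.abs(np.random.default_rng(3).standard_normal((32, 400)))  # toy spectrogram
      taus = np.linspace(50, 500, 32)   # hypothetical frequency-dependent time constants (ms)
      adapted = ic_adaptation_stage(spec, taus)   # input to the subsequent LN model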

  4. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    PubMed

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. The aim of this study was to determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold for the group with hearing loss and, on average, 14.5 dB higher for the group without hearing loss for all studied frequencies. The cortical electrophysiological thresholds obtained with the use of an automated response detection system were highly correlated with behavioral thresholds in the group of individuals with hearing loss. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
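
    The correlation between behavioral and cortical electrophysiological thresholds described above amounts to a simple linear regression; a minimal sketch with hypothetical threshold values follows.

      import numpy as np
      from scipy.stats import linregress

      behavioral_db = np.array([35, 45, 50, 60, 70, 75, 80], dtype=float)          # hypothetical
      cortical_db = behavioral_db + np.array([10, 6, 9, 7, 8, 6, 9], dtype=float)  # roughly 8 dB higher

      fit = linregress(behavioral_db, cortical_db)
      print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, "
            f"mean offset = {np.mean(cortical_db - behavioral_db):.1f} dB")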

  5. Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.

    PubMed

    Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun

    2018-01-01

    When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. As well, the cortical processing of vocal pitch feedback was also modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.

  6. Unreliable evoked responses in autism

    PubMed Central

    Dinstein, Ilan; Heeger, David J.; Lorenzi, Lauren; Minshew, Nancy J.; Malach, Rafael; Behrmann, Marlene

    2012-01-01

    Summary Autism has been described as a disorder of general neural processing, but the particular processing characteristics that might be abnormal in autism have mostly remained obscure. Here, we present evidence of one such characteristic: poor evoked response reliability. We compared cortical response amplitude and reliability (consistency across trials) in visual, auditory, and somatosensory cortices of high-functioning individuals with autism and controls. Mean response amplitudes were statistically indistinguishable across groups, yet trial-by-trial response reliability was significantly weaker in autism, yielding smaller signal-to-noise ratios in all sensory systems. Response reliability differences were evident only in evoked cortical responses and not in ongoing resting-state activity. These findings reveal that abnormally unreliable cortical responses, even to elementary non-social sensory stimuli, may represent a fundamental physiological alteration of neural processing in autism. The results motivate a critical expansion of autism research to determine whether (and how) basic neural processing properties such as reliability, plasticity, and adaptation/habituation are altered in autism. PMID:22998867
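
    The contrast drawn above between mean response amplitude and trial-by-trial reliability can be illustrated with a simple signal-to-noise ratio; the definitions below are illustrative assumptions rather than the authors' exact measures, and the data are simulated.

      import numpy as np

      def amplitude_reliability_snr(trials: np.ndarray):
          """trials: (n_trials, n_samples) evoked responses for one condition."""
          mean_resp = trials.mean(axis=0)
          amplitude = mean_resp.max() - mean_resp.min()   # peak-to-peak of the average response
          trial_sd = trials.std(axis=0).mean()            # mean across-trial variability
          snr = amplitude / trial_sd
          return amplitude, trial_sd, snr

      rng = np.random.default_rng(4)
      signal = np.sin(np.linspace(0, 2 * np.pi, 300))              # toy evoked waveform
      trials = signal + 0.8 * rng.standard_normal((120, 300))      # simulated noisy trials
      print(amplitude_reliability_snr(trials))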

  7. Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.

    PubMed

    Hage, Steffen R

    2018-03-20

    Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons in the PMvr were recorded in rhesus monkeys (Macaca mulatta) while they volitionally produced vocalizations in a visual detection task or passively listened to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    PubMed

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  10. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation

    PubMed Central

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J.; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M.; Lenarz, Thomas; Lim, Hubert H.

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763

  11. Brainstem transcription of speech is disrupted in children with autism spectrum disorders

    PubMed Central

    Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina

    2009-01-01

    Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or about how background noise may impact this transcription process. Unlike cortical encoding of sounds, brainstem representation preserves stimulus features with a degree of fidelity that enables a direct link between acoustic components of the speech syllable (e.g., onsets) to specific aspects of neural encoding (e.g., waves V and A). We measured brainstem responses to the syllable /da/, in quiet and background noise, in children with and without ASD. Children with ASD exhibited deficits in both the neural synchrony (timing) and phase locking (frequency encoding) of speech sounds, despite normal click-evoked brainstem responses. They also exhibited reduced magnitude and fidelity of speech-evoked responses and inordinate degradation of responses by background noise in comparison to typically developing controls. Neural synchrony in noise was significantly related to measures of core and receptive language ability. These data support the idea that abnormalities in the brainstem processing of speech contribute to the language impairment in ASD. Because it is both passively-elicited and malleable, the speech-evoked brainstem response may serve as a clinical tool to assess auditory processing as well as the effects of auditory training in the ASD population. PMID:19635083
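
    Two of the quantities referred to above, response timing and the fidelity of frequency encoding, can be illustrated as a stimulus-to-response cross-correlation lag and a correlation of magnitude spectra; the sketch below uses a broadband stand-in stimulus, a hypothetical 7 ms neural lag, and simulated noise, and is not the study's actual analysis of the /da/ response.

      import numpy as np

      fs = 20000  # Hz, assumed sampling rate
      rng = np.random.default_rng(7)
      stimulus = rng.standard_normal(int(0.05 * fs))      # broadband stand-in stimulus
      delay = int(0.007 * fs)                             # hypothetical 7 ms neural lag
      response = np.concatenate([np.zeros(delay), stimulus])[: stimulus.size]
      response = response + 0.5 * rng.standard_normal(stimulus.size)

      # Timing: lag of the peak stimulus-to-response cross-correlation.
      xcorr = np.correlate(response, stimulus, mode="full")
      lag_s = (np.argmax(xcorr) - (stimulus.size - 1)) / fs

      # Frequency encoding: correlation of stimulus and response magnitude spectra.
      spec_fidelity = np.corrcoef(np.abs(np.fft.rfft(stimulus)),
                                  np.abs(np.fft.rfft(response)))[0, 1]
      print(f"lag = {lag_s * 1000:.1f} ms, spectral fidelity r = {spec_fidelity:.2f}")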

  12. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    PubMed

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences, relevant for later writing abilities, start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development: at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems developing already during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.
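
    At its core, the mismatch response (MMR) used above as an index of auditory discrimination is a deviant-minus-standard difference wave; the sketch below shows that computation on simulated single-trial epochs, leaving out the study's actual preprocessing and statistics.

      import numpy as np

      def mismatch_response(standard_trials: np.ndarray, deviant_trials: np.ndarray) -> np.ndarray:
          """Both inputs: (n_trials, n_samples). Returns the deviant-minus-standard
          difference wave (the MMR)."""
          return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

      rng = np.random.default_rng(5)
      standard = rng.standard_normal((400, 256))        # simulated standard-syllable epochs
      deviant = rng.standard_normal((80, 256)) + 0.3    # simulated deviant epochs with a larger response
      mmr = mismatch_response(standard, deviant)
      print(mmr.mean())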

  13. Different Types of Laughter Modulate Connectivity within Distinct Parts of the Laughter Perception Network

    PubMed Central

    Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin

    2013-01-01

    Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. Here, the right dorsolateral precentral cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter. PMID:23667619

  14. Different types of laughter modulate connectivity within distinct parts of the laughter perception network.

    PubMed

    Wildgruber, Dirk; Szameitat, Diana P; Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin

    2013-01-01

    Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. Here, the right dorsolateral precentral cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter.

  15. A train of electrical pulses applied to the primary auditory cortex evokes a conditioned response in guinea pigs.

    PubMed

    Okuda, Yuji; Shikata, Hiroshi; Song, Wen-Jie

    2011-09-01

    As a step toward developing an auditory prosthesis based on cortical stimulation, we tested whether a single train of pulses applied to the primary auditory cortex could elicit classically conditioned behavior in guinea pigs. Animals were trained using a tone as the conditioned stimulus and an electrical shock to the right eyelid as the unconditioned stimulus. After conditioning, a train of 11 pulses applied to the left primary auditory cortex (AI) induced the conditioned eye-blink response. Cortical stimulation induced no response after extinction. Our results support the feasibility of an auditory prosthesis based on electrical stimulation of the cortex. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  16. Pleasurable Emotional Response to Music: A Case of Neurodegenerative Generalized Auditory Agnosia

    PubMed Central

    Matthews, Brandy R.; Chang, Chiung-Chih; De May, Mary; Engstrom, John; Miller, Bruce L.

    2009-01-01

    Recent functional neuroimaging studies implicate the network of mesolimbic structures known to be active in reward processing as the neural substrate of pleasure associated with listening to music. Psychoacoustic and lesion studies suggest that there is a widely distributed cortical network involved in processing discrete musical variables. Here we present the case of a young man with auditory agnosia as the consequence of cortical neurodegeneration who continues to experience pleasure when exposed to music. In a series of musical tasks the subject was unable to accurately identify any of the perceptual components of music beyond simple pitch discrimination, including musical variables known to impact the perception of affect. The subject subsequently misidentified the musical character of personally familiar tunes presented experimentally, but continued to report that the activity of “listening” to specific musical genres was an emotionally rewarding experience. The implications of this case for the evolving understanding of music perception, music misperception, music memory, and music-associated emotion are discussed. PMID:19253088

  17. Pleasurable emotional response to music: a case of neurodegenerative generalized auditory agnosia.

    PubMed

    Matthews, Brandy R; Chang, Chiung-Chih; De May, Mary; Engstrom, John; Miller, Bruce L

    2009-06-01

    Recent functional neuroimaging studies implicate the network of mesolimbic structures known to be active in reward processing as the neural substrate of pleasure associated with listening to music. Psychoacoustic and lesion studies suggest that there is a widely distributed cortical network involved in processing discrete musical variables. Here we present the case of a young man with auditory agnosia as the consequence of cortical neurodegeneration who continues to experience pleasure when exposed to music. In a series of musical tasks, the subject was unable to accurately identify any of the perceptual components of music beyond simple pitch discrimination, including musical variables known to impact the perception of affect. The subject subsequently misidentified the musical character of personally familiar tunes presented experimentally, but continued to report that the activity of 'listening' to specific musical genres was an emotionally rewarding experience. The implications of this case for the evolving understanding of music perception, music misperception, music memory, and music-associated emotion are discussed.

  18. Auditory temporal processing in healthy aging: a magnetoencephalographic study

    PubMed Central

    Sörös, Peter; Teismann, Inga K; Manemann, Elisabeth; Lütkenhöner, Bernd

    2009-01-01

    Background Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are supposed to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20 – 78 years. Results The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms. PMID:19351410

  19. Hearing after congenital deafness: central auditory plasticity and sensory deprivation.

    PubMed

    Kral, A; Hartmann, R; Tillein, J; Heid, S; Klinke, R

    2002-08-01

    The congenitally deaf cat suffers from a degeneration of the inner ear. The organ of Corti bears no hair cells, yet the auditory afferents are preserved. Since these animals have no auditory experience, they were used as a model for congenital deafness. Kittens were equipped with a cochlear implant at different ages and electro-stimulated over a period of 2.0-5.5 months using a monopolar single-channel compressed analogue stimulation strategy (VIENNA-type signal processor). Following a period of auditory experience, we investigated cortical field potentials in response to electrical biphasic pulses applied by means of the cochlear implant. In comparison to naive unstimulated deaf cats and normal hearing cats, the chronically stimulated animals showed larger cortical regions producing middle-latency responses at or above 300 microV amplitude at the contralateral as well as the ipsilateral auditory cortex. The cortex ipsilateral to the chronically stimulated ear did not show any signs of reduced responsiveness when stimulating the 'untrained' ear through a second cochlear implant inserted in the final experiment. With comparable duration of auditory training, the activated cortical area was substantially smaller if implantation had been performed at an older age of 5-6 months. The data emphasize that young sensory systems in cats have a higher capacity for plasticity than older ones and that there is a sensitive period for the cat's auditory system.

  20. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    PubMed Central

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360

  1. Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.

    PubMed

    Oshurkova, E; Scheich, H; Brosch, M

    2008-06-02

    We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.
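
    The synchrony measure named in this record, vector strength, has a standard definition: spike times are converted to phases of the click period and the length of their mean resultant vector is taken, giving 1 for perfect phase locking and 0 for none. The Python sketch below only illustrates that computation on made-up spike times; it is not the authors' analysis pipeline, and every value in it is invented.

```python
import numpy as np

def vector_strength(spike_times_s, click_rate_hz):
    """Vector strength: 1.0 = perfect phase locking to the click period, 0.0 = none."""
    period = 1.0 / click_rate_hz
    phases = 2.0 * np.pi * (np.asarray(spike_times_s) % period) / period
    return np.abs(np.mean(np.exp(1j * phases)))

# Invented spike train: one spike jittered around each click of a 10 Hz train.
rng = np.random.default_rng(0)
click_rate = 10.0
clicks = np.arange(0.0, 2.0, 1.0 / click_rate)           # 2 s of clicks
spikes = clicks + rng.normal(0.0, 0.005, clicks.size)    # 5 ms jitter
print(f"vector strength at {click_rate:g} Hz: {vector_strength(spikes, click_rate):.2f}")
```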

  2. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  3. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  4. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment with both linear and nonlinear relationships to the beating frequencies. PMID:26065708
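
    The two phase-synchrony measures named here have standard definitions: the phase locking value is the magnitude of the mean phase-difference vector between two signals, and the phase lag index is the absolute mean sign of that phase difference. The sketch below, assuming band-limited EEG channels and SciPy's Hilbert transform, is only a minimal illustration of those formulas on synthetic data, not the study's actual analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv_pli(x, y, fs, band=(8.0, 12.0)):
    """Phase locking value and phase lag index between two signals within a band."""
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="band")
    dphi = np.angle(hilbert(filtfilt(b, a, x))) - np.angle(hilbert(filtfilt(b, a, y)))
    plv = np.abs(np.mean(np.exp(1j * dphi)))        # mean resultant vector length
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))    # asymmetry of the phase-lead sign
    return plv, pli

# Invented pair of noisy 10 Hz "channels" with a fixed phase offset.
fs = 250.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
print(plv_pli(x, y, fs))
```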

  5. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    PubMed

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment with both linear and nonlinear relationships to the beating frequencies.

  6. Sound envelope processing in the developing human brain: A MEG study.

    PubMed

    Tang, Huizhen; Brock, Jon; Johnson, Blake W

    2016-02-01

    This study investigated auditory cortical processing of linguistically-relevant temporal modulations in the developing brains of young children. Auditory envelope following responses to white noise amplitude modulated at rates of 1-80 Hz in healthy children (aged 3-5 years) and adults were recorded using a paediatric magnetoencephalography (MEG) system and a conventional MEG system, respectively. For children, there were envelope following responses to slow modulations but no significant responses to rates higher than about 25 Hz, whereas adults showed significant envelope following responses to almost the entire range of stimulus rates. Our results show that the auditory cortex of preschool-aged children has a sharply limited capacity to process rapid amplitude modulations in sounds, as compared to the auditory cortex of adults. These neurophysiological results are consistent with previous psychophysical evidence for a protracted maturational time course for auditory temporal processing. The findings are also in good agreement with current linguistic theories that posit a perceptual bias for low frequency temporal information in speech during language acquisition. These insights also have clinical relevance for our understanding of language disorders that are associated with difficulties in processing temporal information in speech. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
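
    The stimuli described here are white-noise carriers with sinusoidal amplitude modulation at rates between 1 and 80 Hz. The sketch below shows one conventional way to synthesize such stimuli; the parameter values (duration, sampling rate, modulation depth) are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def am_noise(mod_rate_hz, dur_s=2.0, fs=44100, depth=1.0, seed=0):
    """White-noise carrier with sinusoidal amplitude modulation at mod_rate_hz."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
    stim = carrier * envelope
    return stim / np.max(np.abs(stim))   # normalize to +/- 1

# One stimulus per modulation rate spanning the range reported in the record (1-80 Hz).
stimuli = {rate: am_noise(rate) for rate in (1, 5, 10, 25, 40, 80)}
```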

  7. Clinical Applications for EPs in the ICU.

    PubMed

    Koenig, Matthew A; Kaplan, Peter W

    2015-12-01

    In critically ill patients, evoked potential (EP) testing is an important tool for measuring neurologic function, signal transmission, and secondary processing of sensory information in real time. Evoked potentials measure conduction along the peripheral and central sensory pathways, with longer-latency potentials representing more complex thalamocortical and intracortical processing. In critically ill patients with limited neurologic exams, EP provides a window into brain function and the potential for recovery of consciousness. The most common EP modalities in clinical use in the intensive care unit include somatosensory evoked potentials, brainstem auditory EPs, and cortical event-related potentials. The primary indications for EP in critically ill patients are prognostication in anoxic-ischemic or traumatic coma, monitoring for neurologic improvement or decline, and confirmation of brain death. Somatosensory evoked potentials have become an important prognostic tool for coma recovery, especially in comatose survivors of cardiac arrest. In this population, the bilateral absence of cortical somatosensory evoked potentials has nearly 100% specificity for death or persistent vegetative state. Historically, EP has been regarded as a negative prognostic test, that is, the absence of cortical potentials is associated with poor outcomes while the presence of cortical potentials is prognostically indeterminate. In recent studies, the presence of middle-latency and long-latency potentials as well as the amplitude of cortical potentials is more specific for good outcomes. Event-related potentials, particularly the mismatch negativity to complex auditory patterns, are emerging as an important positive prognostic test in comatose patients. Multimodality predictive algorithms that combine somatosensory evoked potentials, event-related potentials, and clinical and radiographic factors are gaining favor for coma prognostication.

  8. Direct recordings from the auditory cortex in a cochlear implant user.

    PubMed

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.
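
    Two response measures are named in this record: the averaged evoked potential and event-related band power. A generic way to obtain both is to epoch the recording around stimulus onsets, average the raw epochs for the evoked potential, and average a band-limited power envelope (normalized to a pre-stimulus baseline) for the band power. The sketch below illustrates that generic recipe on synthetic data under assumed parameters (epoch window, frequency band); it does not reproduce the authors' processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def evoked_and_band_power(sig, fs, onsets_s, win=(-0.1, 0.4), band=(70.0, 150.0)):
    """Trial-averaged evoked potential and baseline-normalized band power."""
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="band")
    power = np.abs(hilbert(filtfilt(b, a, sig))) ** 2
    n0, n1 = int(win[0] * fs), int(win[1] * fs)
    idx = [int(t * fs) for t in onsets_s]
    ep_sig = np.array([sig[i + n0:i + n1] for i in idx])
    ep_pow = np.array([power[i + n0:i + n1] for i in idx])
    baseline = ep_pow[:, :-n0].mean()                  # pre-stimulus samples only
    return ep_sig.mean(axis=0), ep_pow.mean(axis=0) / baseline

# Invented continuous recording with regularly spaced stimulus onsets.
fs = 1000.0
rng = np.random.default_rng(2)
sig = rng.standard_normal(int(10 * fs))
evoked, rel_power = evoked_and_band_power(sig, fs, onsets_s=np.arange(0.5, 9.0, 1.0))
```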

  9. Dissociable meta-analytic brain networks contribute to coordinated emotional processing.

    PubMed

    Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R

    2018-06-01

    Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli is integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.

  10. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.

  11. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  12. Frontotemporal oxyhemoglobin dynamics predict performance accuracy of dance simulation gameplay: temporal characteristics of top-down and bottom-up cortical activities.

    PubMed

    Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam

    2014-01-15

    We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory input (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study, the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior; while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on the gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in cumulative amount of the oxygenated hemoglobin response in the no music condition compared to that in the music condition. In the FPC, high-performance players showed relatively small variance in the activity regardless of the presence of auditory cues, while low-performance players showed larger differences in the activity between the no music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues and the FPC may work as top-down control to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG. The relative relationships between these cortical areas indicated high- to low-performance levels when performing cued motor tasks. We propose that changes in these relationships can be monitored to gauge performance increases in motor learning and rehabilitation programs. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery.

    PubMed

    Fujioka, Takako; Ross, Bernhard; Trainor, Laurel J

    2015-11-11

    Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) β-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats were reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory-motor coordination. With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that β-band event-related desynchronization in the auditory cortex differentiates between beat positions, specifically between downbeats and the following beat. This is the first demonstration of β-band oscillations related to hierarchical and internalized timing information. Moreover, the meter representation in the β oscillations was widespread across the brain, including sensorimotor and premotor cortices, parietal lobe, and cerebellum. The results extend current understanding of the role of β oscillations in neural processing of predictive timing. Copyright © 2015 the authors 0270-6474/15/3515187-12$15.00/0.
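
    The key contrast in this record is between beat positions within an imagined meter (the downbeat versus other beats in a march or waltz). As a toy illustration of how such a contrast can be formed, the sketch below groups a per-beat measure (e.g., the depth of the post-beat beta-power decrease) by metrical position using the beat index modulo the meter; the numbers are invented and the study's MEG source analysis was far more involved.

```python
import numpy as np

def mean_by_metrical_position(per_beat_values, meter):
    """Average a per-beat measure by metrical position (meter=2 for march, 3 for waltz)."""
    values = np.asarray(per_beat_values, dtype=float)
    positions = np.arange(values.size) % meter
    return {pos + 1: values[positions == pos].mean() for pos in range(meter)}

# Invented per-beat beta-power decreases (arbitrary units), grouped as a waltz.
drops = [1.4, 1.0, 0.9, 1.5, 1.1, 0.8, 1.3, 1.0, 0.9]
print(mean_by_metrical_position(drops, meter=3))   # position 1 = imagined downbeat
```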

  14. Early sensory encoding of affective prosody: neuromagnetic tomography of emotional category changes.

    PubMed

    Thönnessen, Heike; Boers, Frank; Dammers, Jürgen; Chen, Yu-Han; Norra, Christine; Mathiak, Klaus

    2010-03-01

    In verbal communication, prosodic codes may be phylogenetically older than lexical ones. Little is known, however, about early, automatic encoding of emotional prosody. This study investigated the neuromagnetic analogue of mismatch negativity (MMN) as an index of early stimulus processing of emotional prosody using whole-head magnetoencephalography (MEG). We applied two different paradigms to study MMN; in addition to the traditional oddball paradigm, the so-called optimum design was adapted to emotion detection. In a sequence of randomly changing disyllabic pseudo-words produced by one male speaker in neutral intonation, a traditional oddball design with emotional deviants (10% happy and angry each) and an optimum design with emotional (17% happy and sad each) and nonemotional gender deviants (17% female) elicited the mismatch responses. The emotional category changes demonstrated early responses (<200 ms) at both auditory cortices with larger amplitudes at the right hemisphere. Responses to the nonemotional change from male to female voices emerged later (approximately 300 ms). Source analysis pointed at bilateral auditory cortex sources without robust contributions from other sources, such as frontal ones. Conceivably, both auditory cortices encode categorical representations of emotional prosody. Processing of cognitive feature extraction and automatic emotion appraisal may overlap at this level, enabling rapid attentional shifts to important social cues. Copyright (c) 2009 Elsevier Inc. All rights reserved.
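
    The mismatch response analyzed here is, at its core, a difference wave: the average response to deviants minus the average response to standards, with its peak typically quantified by latency and amplitude. The sketch below illustrates only that basic subtraction on synthetic single-channel epochs; the study's MEG source analysis is not reproduced, and all numbers are invented.

```python
import numpy as np

def mismatch_response(standard_epochs, deviant_epochs, times_ms):
    """Deviant-minus-standard difference wave plus its peak latency and amplitude."""
    diff = np.mean(deviant_epochs, axis=0) - np.mean(standard_epochs, axis=0)
    peak = np.argmax(np.abs(diff))
    return diff, times_ms[peak], diff[peak]

# Invented single-channel epochs: 200 standards, 40 deviants, 1 ms sampling.
rng = np.random.default_rng(3)
times = np.arange(-100, 400)                          # ms relative to stimulus onset
standards = rng.standard_normal((200, times.size))
deviants = rng.standard_normal((40, times.size)) - 2.0 * np.exp(-((times - 180) / 40.0) ** 2)
mmn, peak_ms, peak_amp = mismatch_response(standards, deviants, times)
print(f"difference-wave peak {peak_amp:.2f} (a.u.) at {peak_ms} ms")
```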

  15. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  16. Using fNIRS to Examine Occipital and Temporal Responses to Stimulus Repetition in Young Infants: Evidence of Selective Frontal Cortex Involvement

    PubMed Central

    Emberson, Lauren L.; Cannon, Grace; Palmeri, Holly; Richards, John E.; Aslin, Richard N.

    2016-01-01

    How does the developing brain respond to recent experience? Repetition suppression (RS) is a robust and well-characterized response to recent experience found predominantly in the perceptual cortices of the adult brain. We use functional near-infrared spectroscopy (fNIRS) to investigate how perceptual (temporal and occipital) and frontal cortices in the infant brain respond to auditory and visual stimulus repetitions (spoken words and faces). In Experiment 1, we find strong evidence of repetition suppression in the frontal cortex but only for auditory stimuli. In perceptual cortices, we find only suggestive evidence of auditory RS in the temporal cortex and no evidence of visual RS in any ROI. In Experiments 2 and 3, we replicate and extend these findings. Overall, we provide the first evidence that infant and adult brains respond differently to stimulus repetition. We suggest that the frontal lobe may support the development of RS in perceptual cortices. PMID:28012401

  17. Common Sense in Choice: The Effect of Sensory Modality on Neural Value Representations.

    PubMed

    Shuster, Anastasia; Levy, Dino J

    2018-01-01

    Although it is well established that the ventromedial prefrontal cortex (vmPFC) represents value using a common currency across categories of rewards, it is unknown whether the vmPFC represents value irrespective of the sensory modality in which alternatives are presented. In the current study, male and female human subjects completed a decision-making task while their neural activity was recorded using functional magnetic resonance imaging. On each trial, subjects chose between a safe alternative and a lottery, which was presented visually or aurally. A univariate conjunction analysis revealed that the anterior portion of the vmPFC tracks subjective value (SV) irrespective of the sensory modality. Using a novel cross-modality multivariate classifier, we were able to decode auditory value based on visual trials and vice versa. In addition, we found that the visual and auditory sensory cortices, which were identified using functional localizers, are also sensitive to the value of stimuli, albeit in a modality-specific manner. Whereas both primary and higher-order auditory cortices represented auditory SV (aSV), only a higher-order visual area represented visual SV (vSV). These findings expand our understanding of the common currency network of the brain and shed a new light on the interplay between sensory and value information processing.
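
    The cross-modality classifier described here follows a generic cross-decoding logic: fit a decoder on trials from one sensory modality and test it on trials from the other, so that above-chance accuracy implies a value code shared across modalities. The sketch below, using scikit-learn's logistic regression on synthetic trial-by-feature matrices, illustrates that logic only; it is not the authors' classifier or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_modal_accuracy(X_train, y_train, X_test, y_test):
    """Fit a linear decoder on one modality's trials and score it on the other's."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Invented data: 100 trials x 50 features per modality, sharing one value-related pattern.
rng = np.random.default_rng(4)
pattern = rng.standard_normal(50)
y_vis = np.repeat([0, 1], 50)          # low/high value labels, visual trials
y_aud = np.tile([0, 1], 50)            # low/high value labels, auditory trials
X_vis = rng.standard_normal((100, 50)) + np.outer(y_vis, pattern)
X_aud = rng.standard_normal((100, 50)) + np.outer(y_aud, pattern)
print("train visual, test auditory:", cross_modal_accuracy(X_vis, y_vis, X_aud, y_aud))
print("train auditory, test visual:", cross_modal_accuracy(X_aud, y_aud, X_vis, y_vis))
```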

  18. Common Sense in Choice: The Effect of Sensory Modality on Neural Value Representations

    PubMed Central

    2018-01-01

    Abstract Although it is well established that the ventromedial prefrontal cortex (vmPFC) represents value using a common currency across categories of rewards, it is unknown whether the vmPFC represents value irrespective of the sensory modality in which alternatives are presented. In the current study, male and female human subjects completed a decision-making task while their neural activity was recorded using functional magnetic resonance imaging. On each trial, subjects chose between a safe alternative and a lottery, which was presented visually or aurally. A univariate conjunction analysis revealed that the anterior portion of the vmPFC tracks subjective value (SV) irrespective of the sensory modality. Using a novel cross-modality multivariate classifier, we were able to decode auditory value based on visual trials and vice versa. In addition, we found that the visual and auditory sensory cortices, which were identified using functional localizers, are also sensitive to the value of stimuli, albeit in a modality-specific manner. Whereas both primary and higher-order auditory cortices represented auditory SV (aSV), only a higher-order visual area represented visual SV (vSV). These findings expand our understanding of the common currency network of the brain and shed a new light on the interplay between sensory and value information processing. PMID:29619408

  19. Mismatch Negativity with Visual-only and Audiovisual Speech

    PubMed Central

    Ponton, Curtis W.; Bernstein, Lynne E.; Auer, Edward T.

    2009-01-01

    The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing. PMID:19404730

  20. Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy.

    PubMed

    Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S

    2014-03-01

    The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal-hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.
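
    fNIRS analyses of this kind typically convert optical-density changes at two (or more) wavelengths into oxy- and deoxyhemoglobin concentration changes via the modified Beer-Lambert law, i.e., by solving a small linear system weighted by extinction coefficients, source-detector distance, and a differential pathlength factor. The sketch below illustrates that conversion in general form; the extinction coefficients, distance, and pathlength factors are placeholder values, not those used in the study.

```python
import numpy as np

def mbll(delta_od, ext_coeffs, distance_cm, dpf):
    """Modified Beer-Lambert law: optical-density changes -> [HbO, HbR] concentration changes.

    delta_od   : array (n_wavelengths, n_samples) of optical-density changes
    ext_coeffs : [[eps_HbO, eps_HbR], ...] per wavelength (placeholder units here)
    """
    A = np.asarray(ext_coeffs) * distance_cm * np.asarray(dpf)[:, None]
    return np.linalg.solve(A, np.asarray(delta_od))     # row 0: dHbO, row 1: dHbR

# Placeholder two-wavelength example; none of these numbers come from the study.
ext = [[0.55, 1.67],   # ~760 nm: [eps_HbO, eps_HbR]
       [1.06, 0.79]]   # ~850 nm
delta_od = np.array([[0.010], [0.012]])
d_hbo, d_hbr = mbll(delta_od, ext, distance_cm=3.0, dpf=[6.0, 6.0])
print(d_hbo, d_hbr)
```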

  1. Cortical Auditory Deafferentation Induces Long-Term Plasticity in the Inferior Colliculus of Adult Rats: Microarray and qPCR Analysis

    PubMed Central

    Clarkson, Cheryl; Herrero-Turrión, M. Javier; Merchán, Miguel A.

    2012-01-01

    The cortico-collicular pathway is a bilateral excitatory projection from the cortex to the inferior colliculus (IC). It is asymmetric and predominantly ipsilateral. Using microarrays and RT-qPCR we analyzed changes in gene expression in the IC after unilateral lesions of the auditory cortex, comparing the ICs ipsi- and contralateral to the lesioned side. At 15 days after surgery there were mainly changes in gene expression in the IC ipsilateral to the lesion. Regulation primarily involved inflammatory cascade genes, suggesting a direct effect of degeneration rather than a neuronal plastic reorganization. Ninety days after the cortical lesion the ipsilateral IC showed a significant up-regulation of genes involved in apoptosis and axonal regeneration combined with a down-regulation of genes involved in neurotransmission, synaptic growth, and gap junction assembly. In contrast, the contralateral IC at 90 days post-lesion showed an up-regulation in genes primarily related to neurotransmission, cell proliferation, and synaptic growth. There was also a down-regulation in autophagy and neuroprotection genes. These findings suggest that the reorganization in the IC after descending pathway deafferentation is a long-term process involving extensive changes in gene expression regulation. Regulated genes are involved in many different neuronal functions, and the number and gene rearrangement profile seems to depend on the density of loss of the auditory cortical inputs. PMID:23233834

  2. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  3. Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex.

    PubMed

    Albouy, Philippe; Mattout, Jérémie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaëtan; Aguera, Pierre-Emmanuel; Daligault, Sébastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-05-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices as well as a decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide first evidence of altered functioning of the auditory cortices during pitch perception and memory in congenital amusia. They further support the hypothesis that in neurodevelopmental disorders impacting high-level functions (here musical abilities), abnormalities in cerebral processing can be observed in early brain responses.

  4. Binaural fusion and the representation of virtual pitch in the human auditory cortex.

    PubMed

    Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E

    1996-10-01

    The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.

  5. Seeing voices: High-density electrical mapping and source-analysis of the multisensory mismatch negativity evoked during the McGurk illusion.

    PubMed

    Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J

    2007-02-01

    Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.

  6. Seeing voices: High-density electrical mapping and source-analysis of the multisensory mismatch negativity evoked during the McGurk illusion

    PubMed Central

    Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J.

    2006-01-01

    Seeing a speaker’s facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the “McGurk illusion”, where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at ~290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350–400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process. PMID:16757004

  7. Reduced temporal processing in older, normal-hearing listeners evident from electrophysiological responses to shifts in interaural time difference.

    PubMed

    Ozmeral, Erol J; Eddins, David A; Eddins, Ann C

    2016-12-01

    Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than narrow tuning to ITDs, neural channels have broad tuning to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts of rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes in the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both from measured latencies and amplitudes of the global field potentials and source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to show a less asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging. Copyright © 2016 the American Physiological Society.
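
    The two competing predictions above can be made concrete with a toy opponent-channel (hemifield) model: two broadly tuned channels whose output as a function of ITD forms a rate-azimuth function. The sketch below (Python) is a schematic illustration with assumed sigmoid tuning and parameter values, not the study's model; weakening inhibition raises the response and shifts the function medially, whereas lowering a gain term that stands in for neural synchrony reduces the response and flattens its slope.

        import numpy as np

        def rate_azimuth(itd_us, gain=1.0, inhibition=1.0, width_us=200.0):
            """Firing rate of a right-hemifield channel as a function of ITD (microseconds).

            `gain` stands in for overall neural synchrony; `inhibition` scales suppression
            driven by the opposite hemifield. All parameter values are illustrative assumptions.
            """
            drive = gain / (1.0 + np.exp(-itd_us / width_us))                 # contralateral excitation
            suppression = 0.4 * inhibition / (1.0 + np.exp(itd_us / width_us))
            return np.clip(drive - suppression, 0.0, None)

        itds = np.linspace(-800.0, 800.0, 1601)
        conditions = {
            "young (baseline)":         dict(),
            "reduced inhibition":       dict(inhibition=0.5),  # prediction 1: more excitation, medial shift
            "reduced neural synchrony": dict(gain=0.5),        # prediction 2: less excitation, shallower slope
        }

        for label, params in conditions.items():
            rates = rate_azimuth(itds, **params)
            midpoint = itds[np.argmin(np.abs(rates - 0.5 * rates.max()))]  # ITD at half-maximal rate
            max_slope = np.max(np.gradient(rates, itds))                   # steepest point of the function
            print(f"{label:26s} rate at 0 us = {rate_azimuth(0.0, **params):.2f}, "
                  f"midpoint = {midpoint:+5.0f} us, max slope = {max_slope:.5f}/us")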

  8. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    NASA Technical Reports Server (NTRS)

    Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.

    2016-01-01

    The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (saccular-mediated cervical Vestibular Evoked Myogenic Potentials (cVEMP)), or the ocular muscle response (utricle-mediated ocular VEMP (oVEMP)). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle-mediated VEMPs, while being faster and less irritating for the subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from the semicircular canals. This is of high importance for studying otolith-specific deficits, including gait and balance problems that astronauts experience upon returning to earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit patterns of cortical activity similar to those evoked by auditory tone bursts and reported in previous vestibular imaging studies. Subjects wore bilateral MR-compatible skull tappers and headphones inside the 3T GE scanner, while lying in the supine position with eyes closed. Subjects received both forms of the stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in the vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory tone burst elicited vestibular evoked myogenic potentials, indicated by eye muscle responses. We further assessed subjects' postural control and its correlation with vestibular cortical activity. Our results provide the first evidence of using skull taps to elicit vestibular activity inside the MRI scanner. By conducting conjunction analyses, we showed that skull taps elicit the same activation pattern as auditory tone bursts (superior temporal gyrus), and that both modes of stimulation activate previously identified vestibular cortical regions. Additionally, we found that skull taps elicit more robust vestibular activity compared to auditory tone bursts, with fewer reported aversive effects. This further supports the idea that skull taps could replace auditory tone burst stimulation in clinical interventions and basic science research. Moreover, we observed that greater vestibular activation is associated with better balance control. We showed that not only the quality of balance (indicated by the amount of body sway) but also the ability to maintain balance for a longer time (indicated by the balance time) was associated with individuals' vestibular cortical excitability. Our findings support an association between vestibular cortical activity and individual differences in balance. In sum, we found that skull tap stimulation results in activation of the canonical vestibular cortex, suggesting an equally valid but more tolerable stimulation method compared to auditory tone bursts. This is of high importance in longitudinal vestibular assessments, in which minimizing aversive effects may contribute to higher protocol adherence.

  9. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low frequency temporal information present in speech is critical for normal perception, however the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  10. Simulating single word processing in the classic aphasia syndromes based on the Wernicke-Lichtheim-Geschwind theory.

    PubMed

    Weems, Scott A; Reggia, James A

    2006-09-01

    The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.
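
    The lesion logic in simulations of this kind can be illustrated with a deliberately simple sketch (Python). The region names and task pathways below are the textbook Wernicke-Lichtheim-Geschwind schematic, not the authors' trained network, and the all-or-none "intact pathway" rule is an assumption made purely for illustration.

        # Toy WLG-style pathways: each single-word task is modeled as a chain of regions.
        # This illustrates only the lesion bookkeeping, not the learning model in the abstract.
        PATHWAYS = {
            "auditory comprehension": ["primary auditory cortex", "Wernicke's area", "concept regions"],
            "auditory repetition":    ["primary auditory cortex", "Wernicke's area",
                                       "arcuate fasciculus", "Broca's area", "motor cortex"],
            "confrontation naming":   ["visual cortex", "concept regions", "Wernicke's area",
                                       "arcuate fasciculus", "Broca's area", "motor cortex"],
        }

        def impaired_tasks(lesioned_regions):
            """Return the tasks whose pathway passes through any lesioned region."""
            return [task for task, path in PATHWAYS.items()
                    if any(region in lesioned_regions for region in path)]

        for lesion in ({"Broca's area"}, {"Wernicke's area"}, {"arcuate fasciculus"}):
            print(f"lesion of {', '.join(lesion)}: impaired -> {impaired_tasks(lesion) or ['none']}")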

  11. Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.

    PubMed

    Köver, Hania; Bao, Shaowen

    2010-05-05

    Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short-term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read-out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a perception that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
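
    The proposed read-out can be illustrated numerically. The sketch below (Python) is a toy population model with assumed tuning widths, baseline rates, and neuron counts, not the authors' code: it stores a prior over sound frequency in the number of model A1 neurons tuned to each frequency, adds the same baseline activity to every neuron, and decodes frequency from the pooled baseline-plus-evoked activity. Noisy estimates of an ambiguous tone are then biased toward the over-represented frequency, as Bayesian integration would predict.

        import numpy as np

        rng = np.random.default_rng(0)

        # Log-spaced frequency axis (kHz); all parameter values are illustrative assumptions.
        freqs = np.logspace(np.log10(1.0), np.log10(32.0), 200)
        log_f = np.log2(freqs)

        # Prior: sounds near 8 kHz were heard most often during development.
        prior = np.exp(-0.5 * ((log_f - np.log2(8.0)) / 0.5) ** 2)
        prior /= prior.sum()

        # "Cortical map": the number of neurons per frequency bin is proportional to the prior.
        n_neurons = np.maximum(1, np.round(200 * prior)).astype(int)

        def population_estimate(true_freq_khz, baseline=0.2, noise_sd=0.3, tuning_sd=0.4):
            """Decode frequency from baseline + evoked activity pooled over the map."""
            evoked = np.exp(-0.5 * ((log_f - np.log2(true_freq_khz)) / tuning_sd) ** 2)
            # Each bin contributes (baseline + evoked + noise), summed over its neurons.
            activity = np.array([
                np.sum(baseline + e + noise_sd * rng.standard_normal(n))
                for e, n in zip(evoked, n_neurons)
            ])
            activity = np.clip(activity, 0.0, None)
            # Population-vector readout: activity-weighted mean of preferred (log) frequencies.
            return 2.0 ** (np.sum(activity * log_f) / np.sum(activity))

        # Noisy estimates of a 4 kHz tone are pulled toward the over-represented 8 kHz region.
        estimates = [population_estimate(4.0) for _ in range(200)]
        print(f"mean estimate for a 4 kHz tone: {np.mean(estimates):.2f} kHz (biased toward 8 kHz)")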

  12. Degraded speech sound processing in a rat model of fragile X syndrome

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.

    2014-01-01

    Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347
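
    The "neurometric analysis of speech evoked activity using a pattern classifier" describes a standard decoding approach that can be sketched as a leave-one-out nearest-template classifier on trial-by-trial response patterns. The sketch below (Python) runs on simulated spike-count patterns for two speech sounds; the data shapes, response templates, and noise level are assumptions, not the study's recordings or its exact classifier.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated trials x time-bin spike counts for two speech sounds; all values are
        # illustrative assumptions, not recorded data.
        def simulate_trials(template, n_trials=40, noise=1.0):
            return np.clip(template + noise * rng.standard_normal((n_trials, template.size)), 0.0, None)

        t = np.arange(40)                                          # 40 post-onset time bins
        template_a = 3.0 * np.exp(-0.5 * ((t - 10) / 3.0) ** 2)    # early, sharp onset response
        template_b = 3.0 * np.exp(-0.5 * ((t - 22) / 5.0) ** 2)    # later, broader response

        trials = {"sound A": simulate_trials(template_a), "sound B": simulate_trials(template_b)}

        def nearest_template_accuracy(trials_by_sound):
            """Leave-one-out decoding: assign each trial to the closest class-mean response."""
            sounds = list(trials_by_sound)
            correct = total = 0
            for target in sounds:
                data = trials_by_sound[target]
                for i in range(len(data)):
                    templates = {}
                    for s in sounds:
                        d = trials_by_sound[s]
                        # Exclude the held-out trial from its own class template.
                        templates[s] = d[np.arange(len(d)) != i].mean(axis=0) if s == target else d.mean(axis=0)
                    guess = min(sounds, key=lambda s: np.linalg.norm(data[i] - templates[s]))
                    correct += guess == target
                    total += 1
            return correct / total

        print(f"neurometric decoding accuracy: {nearest_template_accuracy(trials):.2f}")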

  13. Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening

    PubMed Central

    Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus

    2012-01-01

    Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitive demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105

  14. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    PubMed

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.

  15. Deviance detection based on regularity encoding along the auditory hierarchy: electrophysiological evidence in humans.

    PubMed

    Escera, Carles; Leung, Sumie; Grimm, Sabine

    2014-07-01

    Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. On the basis of its long latency and cerebral generators, both regularity encoding and deviance detection have been assumed to be cortical processes. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding, rather than on refractoriness, occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of the human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location), fail to elicit MLR correlates but elicit sizable MMNs. Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection are organized in ascending levels of complexity along the auditory pathway, extending from the brainstem up to higher-order areas of the cerebral cortex.

  16. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    PubMed

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has been often used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
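
    The core of a frequency-tagging analysis like this is simply to read the EEG amplitude at each tag frequency from the Fourier spectrum. The sketch below (Python) does so on a synthetic signal; the sampling rate, tag frequencies, amplitudes, and noise level are assumptions chosen only to make the tagged responses visible above the noise floor.

        import numpy as np

        fs = 512.0                       # sampling rate in Hz (assumed)
        duration = 20.0                  # seconds of steady-state stimulation (assumed)
        t = np.arange(0.0, duration, 1.0 / fs)
        rng = np.random.default_rng(2)

        f_aud, f_vis = 40.0, 15.0        # illustrative auditory and visual tag frequencies
        eeg = (0.8 * np.sin(2 * np.pi * f_aud * t)      # auditory steady-state response
               + 1.2 * np.sin(2 * np.pi * f_vis * t)    # visual steady-state response
               + 5.0 * rng.standard_normal(t.size))     # broadband noise

        spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)   # single-sided amplitude spectrum
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        def tag_amplitude(f_tag):
            """Amplitude at the tag frequency (nearest FFT bin)."""
            return spectrum[np.argmin(np.abs(freqs - f_tag))]

        print(f"auditory SSR at {f_aud:.0f} Hz: {tag_amplitude(f_aud):.2f} a.u.")
        print(f"visual SSVEP at {f_vis:.0f} Hz: {tag_amplitude(f_vis):.2f} a.u.")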

  17. Infant discrimination of rapid auditory cues predicts later language impairment.

    PubMed

    Benasich, April A; Tallal, Paula

    2002-10-17

    The etiology and mechanisms of specific language impairment (SLI) in children are unknown. Differences in basic auditory processing abilities have been suggested to underlie their language deficits. Studies suggest that the neuropathology implicated in such impairments, such as atypical patterns of cerebral lateralization and cortical cellular anomalies, likely arises early in life. Such anomalies may play a part in the rapid processing deficits seen in this disorder. However, the prospective, longitudinal studies in infant populations that are critical to examining these hypotheses have not been done. In the study described, performance on brief, rapidly presented, successive auditory processing and perceptual-cognitive tasks was assessed in two groups of infants: normal control infants with no family history of language disorders and infants from families with a positive family history of language impairment. Initial assessments were obtained when infants were 6-9 months of age (M=7.5 months), and the sample was then followed through age 36 months. At the first visit, infants' processing of rapid auditory cues, as well as global processing speed and memory, was assessed. Significant differences in mean thresholds were seen in infants born into families with a history of SLI as compared with controls. Examination of relations between infant processing abilities and emerging language through 24 months of age revealed that the threshold for rapid auditory processing at 7.5 months was the single best predictor of language outcome. At age 3, rapid auditory processing threshold and male sex together predicted 39-41% of the variance in language outcome. Thus, early deficits in rapid auditory processing abilities both precede and predict subsequent language delays. These findings support an essential role for basic nonlinguistic, central auditory processes, particularly rapid spectrotemporal processing, in early language development. Further, these findings provide a temporal diagnostic window during which future language impairments may be addressed.

  18. The 5% difference: early sensory processing predicts sarcasm perception in schizophrenia and schizo-affective disorder.

    PubMed

    Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C

    2014-01-01

    Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.

  19. An Expanded Role for the Dorsal Auditory Pathway in Sensorimotor Control and Integration

    PubMed Central

    Rauschecker, Josef P.

    2010-01-01

    The dual-pathway model of auditory cortical processing assumes that two largely segregated processing streams originating in the lateral belt subserve the two main functions of hearing: identification of auditory “objects”, including speech; and localization of sounds in space (Rauschecker and Tian, 2000). Evidence has accumulated, chiefly from work in humans and nonhuman primates, that an antero-ventral pathway supports the former function, whereas a postero-dorsal stream supports the latter, i.e. processing of space and motion-in-space. In addition, the postero-dorsal stream has also been postulated to subserve some functions of speech and language in humans. A recent review (Rauschecker and Scott, 2009) has proposed the possibility that both functions of the postero-dorsal pathway can be subsumed under the same structural forward model: an efference copy sent from prefrontal and premotor cortex provides the basis for “optimal state estimation” in the inferior parietal lobe and in sensory areas of the posterior auditory cortex. The current article corroborates this model by adding and discussing recent evidence. PMID:20850511

  20. Informational Masking Effects on Neural Encoding of Stimulus Onset and Acoustic Change.

    PubMed

    Niemczak, Christopher E; Vander Werff, Kathy R

    2018-05-18

    Recent investigations using cortical auditory evoked potentials have shown masker-dependent effects on sensory cortical processing of speech information. Background noise maskers consisting of other people talking are particularly difficult for speech recognition. Behavioral studies have related this to perceptual masking, or informational masking, beyond just the overlap of the masker and target at the auditory periphery. The aim of the present study was to use cortical auditory evoked potentials to examine how maskers (i.e., continuous speech-shaped noise [SSN] and multi-talker babble) affect the cortical sensory encoding of speech information at an obligatory level of processing. Specifically, cortical responses to vowel onset and formant change were recorded under different background noise conditions presumed to represent varying amounts of energetic or informational masking. The hypothesis was that, even at this obligatory cortical level of sensory processing, we would observe larger effects on the amplitude and latency of the onset and change components as the amount of informational masking increased across background noise conditions. Onset and change responses were recorded to a vowel change from /u-i/ in young adults under four conditions: quiet, continuous SSN, eight-talker (8T) babble, and two-talker (2T) babble. Repeated measures analyses by noise condition were conducted on amplitude, latency, and response area measurements to determine the differential effects of these noise conditions, designed to represent increasing and varying levels of informational and energetic masking, on the cortical neural representation of the vowel onset and acoustic change response waveforms. All noise conditions significantly reduced onset N1 and P2 amplitudes, onset N1-P2 peak-to-peak amplitudes, and both onset and change response areas compared with the quiet condition. Further, all amplitude and area measures were significantly reduced for the two babble conditions compared with continuous SSN. However, there were no significant differences in peak amplitude or area for either onset or change responses between the two babble conditions (eight versus two talkers). Mean latencies for all onset peaks were delayed in the noise conditions compared with quiet. However, in contrast to the amplitude and area results, differences in peak latency between SSN and the babble conditions did not reach statistical significance. These results support the idea that while background noise maskers generally reduce the amplitude and increase the latency of speech-sound-evoked cortical responses, the type of masking has a significant influence. Speech babble maskers (eight talkers and two talkers) have a larger effect on the obligatory cortical response to speech sound onset and change than purely energetic continuous SSN maskers, which may be attributed to informational masking effects. Neither the neural responses to the onset nor those to the vowel change, however, were sensitive to the hypothesized increase in the amount of informational masking between speech babble maskers with two talkers compared with eight talkers.

  1. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  2. Modulation of Auditory Cortex Response to Pitch Variation Following Training with Microtonal Melodies

    PubMed Central

    Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary

    2012-01-01

    We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured the covariation of the blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities. PMID:23227019
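
    The "psychophysical staircase procedure with feedback" can be sketched as a generic two-down/one-up adaptive track on pitch-interval size, run against a simulated listener. The sketch below (Python) illustrates that class of procedure only; the step size, starting interval, and the listener's psychometric function are assumptions, not the study's protocol.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulated_listener(interval_cents, threshold=20.0, slope=5.0):
            """Probability of a correct micromelody discrimination (assumed psychometric function)."""
            p = 0.5 + 0.5 / (1.0 + np.exp(-(interval_cents - threshold) / slope))
            return rng.random() < p

        def staircase_2down_1up(start=100.0, step=0.8, n_reversals=12):
            """Two-down/one-up track converging near the 70.7%-correct pitch-interval size."""
            interval, correct_in_row, direction = start, 0, 0
            reversals = []
            while len(reversals) < n_reversals:
                if simulated_listener(interval):
                    correct_in_row += 1
                    if correct_in_row == 2:            # two correct in a row -> make it harder
                        correct_in_row = 0
                        if direction == +1:
                            reversals.append(interval)
                        direction = -1
                        interval *= step
                else:                                  # one error -> make it easier
                    correct_in_row = 0
                    if direction == -1:
                        reversals.append(interval)
                    direction = +1
                    interval /= step
            return np.mean(reversals[-6:])             # threshold = mean of the last reversals

        print(f"estimated discrimination threshold: {staircase_2down_1up():.1f} cents")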

  3. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  4. Neuronal Effects of Auditory Distraction on Visual Attention

    ERIC Educational Resources Information Center

    Smucny, Jason; Rojas, Donald C.; Eichman, Lindsay C.; Tregellas, Jason R.

    2013-01-01

    Selective attention in the presence of distraction is a key aspect of healthy cognition. The underlying neurobiological processes, have not, however, been functionally well characterized. In the present study, we used functional magnetic resonance imaging to determine how ecologically relevant distracting noise affects cortical activity in 27…

  5. Process Timing and Its Relation to the Coding of Tonal Harmony

    ERIC Educational Resources Information Center

    Aksentijevic, Aleksandar; Barber, Paul J.; Elliott, Mark A.

    2011-01-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six…

  6. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), emerges in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct, mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS at three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.
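
    The proposed mechanism, forward suppression inherited from a depressing thalamocortical projection rather than from cortical inhibition, can be sketched with a simple short-term depression model (Python). One depressing input carries bursts that alternate between a preferred and a non-preferred location; the spatial weights, release fraction, and recovery time constant below are assumptions for illustration, and the point is only that depression sharpens the response contrast between the two locations beyond what spatial tuning alone provides.

        import numpy as np

        def stream_responses(w_pref=1.0, w_nonpref=0.4, U=0.7, tau_ms=300.0,
                             interval_ms=100.0, n_bursts=40):
            """Responses of one model cortical cell to bursts alternating between two locations.

            A single depressing (thalamocortical-like) input carries both locations; spatial
            tuning is summarized by the two weights. All parameter values are assumptions.
            """
            weights = [w_pref, w_nonpref]          # bursts alternate preferred / non-preferred
            recovery = np.exp(-interval_ms / tau_ms)
            r = 1.0                                # available synaptic resources
            responses = {0: [], 1: []}
            for k in range(n_bursts):
                loc = k % 2
                responses[loc].append(weights[loc] * r)   # postsynaptic response to this burst
                r *= 1.0 - U * weights[loc]               # use-dependent depletion
                r = 1.0 - (1.0 - r) * recovery            # partial recovery before the next burst
            # Discard the first few bursts so the synapse reaches steady state.
            return np.mean(responses[0][5:]), np.mean(responses[1][5:])

        a, b = stream_responses()
        print(f"preferred-location response: {a:.2f}, non-preferred: {b:.2f}")
        print(f"response contrast with depression: {(a - b) / (a + b):.2f} "
              f"(vs {(1.0 - 0.4) / 1.4:.2f} expected from the spatial weights alone)")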

  7. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.

    PubMed

    Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P

    2005-05-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1 the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.

  8. Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex

    PubMed Central

    Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2010-01-01

    Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343

  9. The Role of the Auditory Brainstem in Processing Musically Relevant Pitch

    PubMed Central

    Bidelman, Gavin M.

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294

  10. Language experience enhances early cortical pitch-dependent responses

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Ananthakrishnan, Saradha; Vijayaraghavan, Venkatakrishnan

    2014-01-01

    Pitch processing at cortical and subcortical stages of processing is shaped by language experience. We recently demonstrated that specific components of the cortical pitch response (CPR) index the more rapidly-changing portions of the high rising Tone 2 of Mandarin Chinese, in addition to marking pitch onset and sound offset. In this study, we examine how language experience (Mandarin vs. English) shapes the processing of different temporal attributes of pitch reflected in the CPR components using stimuli representative of within-category variants of Tone 2. Results showed that the magnitude of CPR components (Na-Pb and Pb-Nb) and the correlation between these two components and pitch acceleration were stronger for the Chinese listeners compared to English listeners for stimuli that fell within the range of Tone 2 citation forms. Discriminant function analysis revealed that the Na-Pb component was more than twice as important as Pb-Nb in grouping listeners by language affiliation. In addition, a stronger stimulus-dependent, rightward asymmetry was observed for the Chinese group at the temporal, but not frontal, electrode sites. This finding may reflect selective recruitment of experience-dependent, pitch-specific mechanisms in right auditory cortex to extract more complex, time-varying pitch patterns. Taken together, these findings suggest that long-term language experience shapes early sensory level processing of pitch in the auditory cortex, and that the sensitivity of the CPR may vary depending on the relative linguistic importance of specific temporal attributes of dynamic pitch. PMID:25506127
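
    A discriminant function analysis of this kind can be sketched with scikit-learn's LinearDiscriminantAnalysis applied to the two CPR component amplitudes. The data below are synthetic; the group means, spreads, and sample sizes are assumptions chosen only so that the hypothetical Na-Pb feature separates the groups more strongly than Pb-Nb, mirroring the kind of result the abstract reports.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(4)

        # Synthetic CPR component amplitudes (arbitrary units) for two language groups.
        n = 30
        chinese = np.column_stack([rng.normal(4.0, 1.0, n),    # Na-Pb (assumed group mean)
                                   rng.normal(2.2, 1.0, n)])   # Pb-Nb (assumed group mean)
        english = np.column_stack([rng.normal(2.5, 1.0, n),
                                   rng.normal(1.8, 1.0, n)])

        X = np.vstack([chinese, english])
        y = np.array(["Chinese"] * n + ["English"] * n)

        lda = LinearDiscriminantAnalysis()
        lda.fit(X, y)

        # Standardized discriminant weights indicate each component's relative importance
        # for separating the groups (analogous to the paper's Na-Pb vs Pb-Nb comparison).
        weights = lda.scalings_[:, 0] * X.std(axis=0)
        for name, w in zip(["Na-Pb", "Pb-Nb"], weights):
            print(f"{name}: standardized weight = {w:+.2f}")
        print(f"classification accuracy (training set): {lda.score(X, y):.2f}")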

  11. [Feasibility of auditory cortical stimulation for the treatment of tinnitus. Three case reports].

    PubMed

    Litré, C-F; Giersky, F; Theret, E; Leveque, M; Peruzzi, P; Rousseaux, P

    2010-08-01

    Tinnitus is a public health issue in France. Around 1% of the population is affected, and 30,000 people are handicapped in their daily lives. The treatments available for disabling tinnitus have until now been disappointing. We report our neurosurgical experience in treating these patients. Between 2006 and 2008, repetitive transcranial magnetic stimulation (rTMS) was performed following several supraliminal and subliminal protocols in 16 patients whose mean age was 47 years (range, 35-71). All patients underwent anatomical and functional MRI of the auditory cortex before and 18 h after rTMS, targeting the region straddling the primary and secondary auditory cortices. All patients underwent audiometric testing by an ENT physician. Nine patients responded to rTMS. After these investigations, two quadripolar electrodes (Resume), connected to a stimulating device implanted under the skin (Synergy, from Medtronic), were extradurally implanted in three patients. The electrodes were placed between the primary and secondary auditory cortices. The mean follow-up was 25 months, and significant improvement was found in these patients. This preparatory work demonstrated the feasibility of cortical stimulation for the symptomatic treatment of tinnitus. The intermediate- and long-term therapeutic effects remain to be evaluated. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.

  12. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    PubMed

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation, as it maintains the relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In three experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed that TRE differs in nature between the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests that the auditory system dominates temporal processing by providing a frame of reference for the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation.

    PubMed

    Chambers, Anna R; Salazar, Juan J; Polley, Daniel B

    2016-01-01

    Neurons at higher stages of sensory processing can partially compensate for a sudden drop in peripheral input through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy where >95% of afferent synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, auditory cortex (ACtx) processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the ACtx, the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of temporally modulated pulse trains and speech tokens were both greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied a persistent auditory processing at the level of the MGB that was likely enabled through increased central gain. Compensatory plasticity at the level of the auditory thalamus was less robust overall than previous observations in cortex or midbrain. Hierarchical differences in compensatory plasticity following sensorineural hearing loss may reflect differences in GABA circuit organization within the MGB, as compared to the ACtx or IC.

  14. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias toward processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve (AN) firing patterns might account for the high-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority effect well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the high-voice superiority effect observed in human ERP and psychophysical music listening studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Serial auditory-evoked potentials in the diagnosis and monitoring of a child with Landau-Kleffner syndrome.

    PubMed

    Plyler, Erin; Harkrider, Ashley W

    2013-01-01

    A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr, his vocabulary had diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists, who often are on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists often are among the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance for early and accurate referral, diagnosis, and monitoring of a child with LKS, which is imperative for a positive prognosis. American Academy of Audiology.

  16. Transcortical sensory aphasia: revisited and revised.

    PubMed

    Boatman, D; Gordon, B; Hart, J; Selnes, O; Miglioretti, D; Lenz, F

    2000-08-01

    Transcortical sensory aphasia (TSA) is characterized by impaired auditory comprehension with intact repetition and fluent speech. We induced TSA transiently by electrical interference during routine cortical function mapping in six adult seizure patients. For each patient, TSA was associated with multiple posterior cortical sites, including the posterior superior and middle temporal gyri, in classical Wernicke's area. A number of TSA sites were immediately adjacent to sites where Wernicke's aphasia was elicited in the same patients. Phonological decoding of speech sounds was assessed by auditory syllable discrimination and found to be intact at all sites where TSA was induced. At a subset of electrode sites where the pattern of language deficits otherwise resembled TSA, naming and word reading remained intact. Language lateralization testing by intracarotid amobarbital injection showed no evidence of independent right hemisphere language. These results suggest that TSA may result from a one-way disruption between left hemisphere phonology and lexical-semantic processing.

  17. Memory reactivation during rapid eye movement sleep promotes its generalization and integration in cortical stores.

    PubMed

    Sterpenich, Virginie; Schmidt, Christina; Albouy, Geneviève; Matarazzo, Luca; Vanhaudenhuyse, Audrey; Boveroux, Pierre; Degueldre, Christian; Leclercq, Yves; Balteau, Evelyne; Collette, Fabienne; Luxen, André; Phillips, Christophe; Maquet, Pierre

    2014-06-01

    Memory reactivation appears to be a fundamental process in memory consolidation. In this study we tested the influence of memory reactivation during rapid eye movement (REM) sleep on memory performance and brain responses at retrieval in healthy human participants. Fifty-six healthy subjects (28 women and 28 men, age [mean ± standard deviation]: 21.6 ± 2.2 y) participated in this functional magnetic resonance imaging (fMRI) study. Auditory cues were associated with pictures of faces during their encoding. These memory cues delivered during REM sleep enhanced subsequent accurate recollections but also false recognitions. These results suggest that reactivated memories interacted with semantically related representations, and induced new creative associations, which subsequently reduced the distinction between new and previously encoded exemplars. Cues had no effect if presented during stage 2 sleep, or if they were not associated with faces during encoding. Functional magnetic resonance imaging revealed that following exposure to conditioned cues during REM sleep, responses to faces during retrieval were enhanced both in a visual area and in a cortical region of multisensory (auditory-visual) convergence. These results show that reactivating memories during REM sleep enhances cortical responses during retrieval, suggesting the integration of recent memories within cortical circuits, favoring the generalization and schematization of the information.

  18. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
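
    The global and local efficiency measures referred to above have standard graph-theoretic definitions (average inverse shortest-path length over the whole network and within each node's neighbourhood, respectively). A minimal sketch of how such metrics might be computed from a thresholded functional connectivity matrix, assuming hypothetical random data and the networkx library rather than the authors' actual pipeline:

```python
import numpy as np
import networkx as nx

# Hypothetical functional connectivity: correlations between 90 cortical regions.
rng = np.random.default_rng(0)
corr = np.abs(rng.standard_normal((90, 90)))
corr = (corr + corr.T) / 2            # symmetrize
np.fill_diagonal(corr, 0)

# Keep only the strongest 10% of connections to form a binary graph.
cutoff = np.quantile(corr[np.triu_indices_from(corr, k=1)], 0.90)
graph = nx.from_numpy_array((corr >= cutoff).astype(int))

# Global and local efficiency of the kind compared between learner groups.
print("global efficiency:", nx.global_efficiency(graph))
print("local efficiency: ", nx.local_efficiency(graph))
```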

  19. Emergent selectivity for task-relevant stimuli in higher-order auditory cortex

    PubMed Central

    Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.

    2014-01-01

    A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467

  20. Tracking the voluntary control of auditory spatial attention with event-related brain potentials.

    PubMed

    Störmer, Viola S; Green, Jessica J; McDonald, John J

    2009-03-01

    A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.

  1. Information fusion via isocortex-based Area 37 modeling

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta concepts from the low order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed out of five cortical stacks in which each stack consists of three vertically oriented six layer isocortex models. The input to output training of each cortical model uses the OCOS (on center - off surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1) which is inherently a recurrent network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models as it is difficult to divorce the sensory processing subsystems from the higher level processing in the associative cortex. The overall software architecture presented is biologically based and is presented as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system which utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.

  2. Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia

    PubMed Central

    Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail

    2015-01-01

    Major theories on the neural basis of schizophrenic core symptoms highlight aberrant salience network activity (insula and anterior cingulate cortex), prefrontal hypoactivation, sensory processing deficits as well as an impaired connectivity between temporal and prefrontal cortices. The mismatch negativity is a potential biomarker of schizophrenia and its reduction might be a consequence of each of these mechanisms. In contrast to the previous electroencephalographic studies, functional magnetic resonance imaging may disentangle the involved brain networks at high spatial resolution and determine contributions from localized brain responses and functional connectivity to the schizophrenic impairments. Twenty-four patients and 24 matched control subjects underwent functional magnetic resonance imaging during an optimized auditory mismatch task. Haemodynamic responses and functional connectivity were compared between groups. These data sets further entered a diagnostic classification analysis to assess impairments on the individual patient level. In the control group, mismatch responses were detected in the auditory cortex, prefrontal cortex and the salience network (insula and anterior cingulate cortex). Furthermore, mismatch processing was associated with a deactivation of the visual system and the dorsal attention network indicating a shift of resources from the visual to the auditory domain. The patients exhibited reduced activation in all of the respective systems (right auditory cortex, prefrontal cortex, and the salience network) as well as reduced deactivation of the visual system and the dorsal attention network. Group differences were most prominent in the anterior cingulate cortex and adjacent prefrontal areas. The latter regions also exhibited a reduced functional connectivity with the auditory cortex in the patients. In the classification analysis, haemodynamic responses yielded a maximal accuracy of 83% based on four features; functional connectivity data performed similarly or worse for up to about 10 features. However, connectivity data yielded a better performance when including more than 10 features yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation as well as a disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments on the individual patient level, rendering it a potential biomarker of schizophrenia. PMID:25743635

  3. A magnetoencephalography study of multi-modal processing of pain anticipation in primary sensory cortices.

    PubMed

    Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G

    2015-09-24

    Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we evaluated with magnetoencephalography the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When encountered with well-established cues that signaled pain, visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. A Meta-Analytic Study of the Neural Systems for Auditory Processing of Lexical Tones.

    PubMed

    Kwok, Veronica P Y; Dan, Guo; Yakpo, Kofi; Matthews, Stephen; Fox, Peter T; Li, Ping; Tan, Li-Hai

    2017-01-01

    The neural systems supporting lexical tone processing have been studied for many years. However, previous findings have been mixed with regard to hemispheric specialization for the perception of linguistic pitch patterns in native speakers of tonal languages. In this study, we performed two activation likelihood estimation (ALE) meta-analyses, one on neuroimaging studies of auditory processing of lexical tones in tonal languages (17 studies), and the other on auditory processing of lexical information in non-tonal languages as a control analysis for comparison (15 studies). The lexical tone ALE analysis showed significant brain activations in bilateral inferior prefrontal regions, bilateral superior temporal regions and the right caudate, while the control ALE analysis showed significant cortical activity in the left inferior frontal gyrus and left temporo-parietal regions. However, we failed to obtain significant differences in the contrast analysis between the two auditory conditions, which might be due to the limited number of studies available for comparison. Although the current study lacks evidence to argue for a lexical-tone-specific activation pattern, our results provide clues and directions for future investigations on this topic; more sophisticated methods are needed to explore this question in greater depth.

  5. A Meta-Analytic Study of the Neural Systems for Auditory Processing of Lexical Tones

    PubMed Central

    Kwok, Veronica P. Y.; Dan, Guo; Yakpo, Kofi; Matthews, Stephen; Fox, Peter T.; Li, Ping; Tan, Li-Hai

    2017-01-01

    The neural systems supporting lexical tone processing have been studied for many years. However, previous findings have been mixed with regard to hemispheric specialization for the perception of linguistic pitch patterns in native speakers of tonal languages. In this study, we performed two activation likelihood estimation (ALE) meta-analyses, one on neuroimaging studies of auditory processing of lexical tones in tonal languages (17 studies), and the other on auditory processing of lexical information in non-tonal languages as a control analysis for comparison (15 studies). The lexical tone ALE analysis showed significant brain activations in bilateral inferior prefrontal regions, bilateral superior temporal regions and the right caudate, while the control ALE analysis showed significant cortical activity in the left inferior frontal gyrus and left temporo-parietal regions. However, we failed to obtain significant differences in the contrast analysis between the two auditory conditions, which might be due to the limited number of studies available for comparison. Although the current study lacks evidence to argue for a lexical-tone-specific activation pattern, our results provide clues and directions for future investigations on this topic; more sophisticated methods are needed to explore this question in greater depth. PMID:28798670

  6. Cross-Modal Multivariate Pattern Analysis

    PubMed Central

    Meyer, Kaspar; Kaplan, Jonas T.

    2011-01-01

    Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices. PMID:22105246
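
    As a rough illustration of the cross-modal MVPA logic described above (train a classifier on activity patterns from one set of presentations, test it on an independent set), here is a hedged sketch using simulated voxel patterns and scikit-learn; the category names, pattern sizes, and effect strengths are assumptions, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 200

# Simulated auditory-cortex voxel patterns for two sound-implying video categories
# (e.g. "shattering vase" vs. "howling dog"). Pattern strength and sizes are
# illustrative assumptions.
category_patterns = 0.5 * rng.standard_normal((2, n_voxels))
labels = np.repeat([0, 1], n_trials // 2)
train_runs = category_patterns[labels] + rng.standard_normal((n_trials, n_voxels))
test_runs = category_patterns[labels] + rng.standard_normal((n_trials, n_voxels))

# Train a linear classifier on one set of (muted) video presentations and test it
# on an independent set, as in a leave-one-run-out decoding scheme.
clf = LogisticRegression(max_iter=1000).fit(train_runs, labels)
print(f"held-out decoding accuracy: {clf.score(test_runs, labels):.2f}")
```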

  7. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.

    PubMed

    Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2012-08-01

    Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.

  8. Brain dynamics that correlate with effects of learning on auditory distance perception.

    PubMed

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
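
    The theta ERS and alpha ERD measures described above are conventionally quantified as the percentage change of band-limited power relative to a pre-stimulus baseline. A minimal sketch of that computation, assuming a single simulated IC activation time course and an arbitrary sampling rate, using scipy:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(-1.0, 2.0, 1 / fs)           # epoch: 1 s baseline, 2 s post-stimulus
rng = np.random.default_rng(2)
ic_activity = rng.standard_normal(t.size)  # placeholder for one IC activation time course

def band_power(signal, low, high):
    """Instantaneous band power via band-pass filtering plus the Hilbert envelope."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

bands = {"theta (4-8 Hz)": (4, 8), "alpha (8-12 Hz)": (8, 12)}
for name, (low, high) in bands.items():
    power = band_power(ic_activity, low, high)
    baseline = power[t < 0].mean()
    post = power[(t >= 0) & (t < 1.0)].mean()
    # Positive values indicate ERS, negative values ERD, relative to baseline.
    print(f"{name}: {(post - baseline) / baseline * 100:+.1f}%")
```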

  9. The steady-state response of the cerebral cortex to the beat of music reflects both the comprehension of music and attention

    PubMed Central

    Meltzer, Benjamin; Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias

    2015-01-01

    The brain's analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, nonsensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces. PMID:26300760
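
    The steady-state measure used in studies of this kind is essentially the EEG spectral amplitude at the beat frequency relative to neighbouring frequency bins. A hedged sketch of that computation on a simulated recording (the sampling rate, beat rate, and signal strength are illustrative assumptions):

```python
import numpy as np

fs = 500                       # assumed EEG sampling rate (Hz)
beat_hz = 2.0                  # beat frequency, e.g. 120 beats per minute
t = np.arange(0, 60, 1 / fs)   # one minute of listening

rng = np.random.default_rng(3)
# Simulated channel: a weak beat-locked oscillation buried in noise (illustrative only).
eeg = 0.2 * np.sin(2 * np.pi * beat_hz * t) + rng.standard_normal(t.size)

amplitude = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Steady-state response strength: amplitude at the beat frequency relative to
# the mean amplitude of neighbouring frequency bins.
target = int(np.argmin(np.abs(freqs - beat_hz)))
neighbours = np.r_[target - 10:target - 2, target + 3:target + 11]
print("signal-to-noise ratio at the beat frequency:",
      amplitude[target] / amplitude[neighbours].mean())
```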

  10. The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex

    PubMed Central

    Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J

    2014-01-01

    Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075

  11. Information flow in the auditory cortical network

    PubMed Central

    Hackett, Troy A.

    2011-01-01

    Auditory processing in the cerebral cortex is comprised of an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421

  12. Auditory cortical activity after intracortical microstimulation and its role for sensory processing and learning.

    PubMed

    Deliano, Matthias; Scheich, Henning; Ohl, Frank W

    2009-12-16

    Several studies have shown that animals can learn to make specific use of intracortical microstimulation (ICMS) of sensory cortex within behavioral tasks. Here, we investigate how the focal, artificial activation by ICMS leads to a meaningful, behaviorally interpretable signal. In natural learning, this involves large-scale activity patterns in widespread brain-networks. We therefore trained gerbils to discriminate closely neighboring ICMS sites within primary auditory cortex producing evoked responses largely overlapping in space. In parallel, during training, we recorded electrocorticograms (ECoGs) at high spatial resolution. Applying a multivariate classification procedure, we identified late spatial patterns that emerged with discrimination learning from the ongoing poststimulus ECoG. These patterns contained information about the preceding conditioned stimulus, and were associated with a subsequent correct behavioral response by the animal. Thereby, relevant pattern information was mainly carried by neuron populations outside the range of the lateral spatial spread of ICMS-evoked cortical activation (approximately 1.2 mm). This demonstrates that the stimulated cortical area not only encoded information about the stimulation sites by its focal, stimulus-driven activation, but also provided meaningful signals in its ongoing activity related to the interpretation of ICMS learned by the animal. This involved the stimulated area as a whole, and apparently required large-scale integration in the brain. However, ICMS locally interfered with the ongoing cortical dynamics by suppressing pattern formation near the stimulation sites. The interaction between ICMS and ongoing cortical activity has several implications for the design of ICMS protocols and cortical neuroprostheses, since the meaningful interpretation of ICMS depends on this interaction.

  13. Auditory processing in absolute pitch possessors

    NASA Astrophysics Data System (ADS)

    McKetton, Larissa; Schneider, Keith A.

    2018-05-01

    Absolute pitch (AP) is the rare ability to classify a musical pitch without a reference standard. It has been of great interest to researchers studying auditory processing and music cognition, since it is seldom expressed and sheds light on the influence of neurodevelopmental biological predispositions and the onset of musical training. We investigated the smallest detectable frequency difference, or just noticeable difference (JND), between two pitches. Here, we report significant differences in JND thresholds for both AP musicians and non-AP musicians compared to non-musician control groups at both 1000 Hz and 987.76 Hz testing frequencies. Although the AP musicians did better than non-AP musicians, the difference was not significant. In addition, we looked at neuro-anatomical correlates of musicianship and AP using structural MRI. We report increased cortical thickness of the left Heschl's Gyrus (HG) and decreased cortical thickness of the inferior frontal opercular gyrus (IFO) and circular insular sulcus (CIS) volume in AP musicians compared to non-AP musicians and controls. These structures may therefore be optimally enhanced and reduced to form the most efficient network for AP to emerge.
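
    Pitch JNDs of this sort are commonly estimated with an adaptive two-down/one-up staircase, which converges on roughly the 70.7%-correct point of the psychometric function. The following simulation sketch illustrates the procedure with an assumed listener model; it is not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
true_jnd_hz = 3.0    # assumed "true" JND of a simulated listener near 1000 Hz

def responds_correctly(delta_hz):
    """Toy psychometric function: p(correct) grows from 0.5 toward 1 with delta."""
    p_correct = 0.5 + 0.5 * (1 - np.exp(-delta_hz / true_jnd_hz))
    return rng.random() < p_correct

delta, step = 20.0, 2.0          # starting frequency difference and step size (Hz)
n_correct, reversals, last_direction = 0, [], 0
while len(reversals) < 12:
    if responds_correctly(delta):
        n_correct += 1
        if n_correct < 2:        # need two correct in a row before stepping down
            continue
        n_correct, direction = 0, -1
    else:
        n_correct, direction = 0, +1
    if last_direction and direction != last_direction:
        reversals.append(delta)  # record the difference at each direction reversal
    last_direction = direction
    delta = max(0.5, delta + direction * step)

print(f"estimated JND: {np.mean(reversals[-8:]):.1f} Hz")
```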

  14. Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat

    PubMed Central

    Zuk, Nathaniel J.; Carney, Laurel H.; Lalor, Edmund C.

    2018-01-01

    Prior research has shown that musical beats are salient at the level of the cortex in humans. Yet below the cortex there is considerable sub-cortical processing that could influence beat perception. Some biases, such as a tempo preference and an audio frequency bias for beat timing, could result from sub-cortical processing. Here, we used models of the auditory-nerve and midbrain-level amplitude modulation filtering to simulate sub-cortical neural activity to various beat-inducing stimuli, and we used the simulated activity to determine the tempo or beat frequency of the music. First, irrespective of the stimulus being presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal. Second, sub-cortical processing predicted a stronger influence of lower audio frequencies on beat perception. However, the tempo identification algorithm that was optimized for simple stimuli often failed for recordings of music. For music, the most highly synchronized model activity occurred at a multiple of the beat frequency. Using bottom-up processes alone is insufficient to produce beat-locked activity. Instead, a learned and possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves the performance of tempo identification. PMID:29896080
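
    One simple way to read a tempo out of simulated neural activity is to take the summed firing-rate envelope, compute its autocorrelation, and pick the most periodic lag within a plausible tempo range. The sketch below applies that step to a synthetic envelope; it stands in for, and is much simpler than, the auditory-nerve and midbrain models used in the study.

```python
import numpy as np

fs = 200                         # envelope sampling rate (Hz), assumed
bpm_true = 100
t = np.arange(0, 20, 1 / fs)

rng = np.random.default_rng(5)
# Synthetic "neural envelope": brief bursts at the beat rate plus noise
# (a stand-in for simulated sub-cortical output, not the actual models).
envelope = (np.sin(2 * np.pi * bpm_true / 60 * t) > 0.95).astype(float)
envelope += 0.3 * rng.standard_normal(t.size)

# Autocorrelation of the mean-removed envelope.
x = envelope - envelope.mean()
autocorr = np.correlate(x, x, mode="full")[x.size - 1:]

# Report the most periodic lag inside a plausible tempo range (40-250 bpm).
lags = np.arange(autocorr.size) / fs
valid = (lags >= 60 / 250) & (lags <= 60 / 40)
best_lag = lags[valid][np.argmax(autocorr[valid])]
print(f"estimated tempo: {60 / best_lag:.1f} beats per minute")
```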

  15. Brain processing of meter and rhythm in music. Electrophysiological evidence of a common network.

    PubMed

    Kuck, Helen; Grossbach, Michael; Bangert, Marc; Altenmüller, Eckart

    2003-11-01

    To determine cortical structures involved in "global" meter and "local" rhythm processing, slow brain potentials (DC potentials) were recorded from the scalp of 18 musically trained subjects while they listened to pairs of monophonic sequences with both metric structure and rhythmic variations. The second sequence could be either identical to or different from the first one. Differences were either of a metric or a rhythmic nature. The subjects' task was to judge whether the sequences were identical or not. During processing of the auditory tasks, brain activation patterns along with the subjects' performance were assessed using 32-channel DC electroencephalography. Data were statistically analyzed using MANOVA. Processing of both meter and rhythm produced sustained cortical activation over bilateral frontal and temporal brain regions. A shift towards right hemispheric activation was pronounced during presentation of the second stimulus. Processing of rhythmic differences yielded a more centroparietal activation compared to metric processing. These results do not support Lerdahl and Jackendoff's two-component model, which predicts a dissociation of left hemispheric rhythm and right hemispheric meter processing. We suggest that the uniform right temporofrontal predominance reflects auditory working memory and a pattern recognition module, which participates in both rhythm and meter processing. More pronounced parietal activation during rhythm processing may be related to switching of task-solving strategies towards mental imagination of the score.

  16. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex

    PubMed Central

    Moucha, Raluca; Pandya, Pritesh K.; Engineer, Navzer D.; Rathbun, Daniel L.

    2010-01-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8–4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus-specific plasticity and indicate that background conditions can strongly influence cortical plasticity. PMID:15616812

  17. Hearing loss in older adults affects neural systems supporting speech comprehension.

    PubMed

    Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-08-31

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.

  18. Hearing loss in older adults affects neural systems supporting speech comprehension

    PubMed Central

    Peelle, Jonathan E.; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-01-01

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging (fMRI) to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry (VBM), demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task. PMID:21880924

  19. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  20. Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.

    PubMed

    Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis

    2018-07-15

    The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, was reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
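
    Transfer entropy, used above to estimate the direction of information flow between sources, measures how much the past of one signal improves prediction of another beyond that signal's own past. A minimal histogram-based sketch with order-1 histories (the study's estimator and preprocessing are more elaborate):

```python
import numpy as np

def entropy(*series):
    """Shannon entropy (bits) of the joint distribution of the discretized series."""
    _, counts = np.unique(np.stack(series, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, n_bins=8):
    """TE(x -> y) with order-1 histories and equal-width histogram binning (bits)."""
    edges = lambda s: np.histogram_bin_edges(s, bins=n_bins)[1:-1]
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]
    # TE = H(Y_t+1, Y_t) + H(Y_t, X_t) - H(Y_t+1, Y_t, X_t) - H(Y_t)
    return (entropy(y_next, y_past) + entropy(y_past, x_past)
            - entropy(y_next, y_past, x_past) - entropy(y_past))

# Toy check: y follows x with a one-sample lag, so TE(x -> y) should exceed TE(y -> x).
rng = np.random.default_rng(6)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
print("TE(x -> y):", transfer_entropy(x, y))
print("TE(y -> x):", transfer_entropy(y, x))
```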

  1. Temporal Lobe Epilepsy Alters Auditory-motor Integration For Voice Control

    PubMed Central

    Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun

    2016-01-01

    Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768

  2. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    PubMed

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  3. Relational Associative Learning Induces Cross-Modal Plasticity in Early Visual Cortex

    PubMed Central

    Headley, Drew B.; Weinberger, Norman M.

    2015-01-01

    Neurobiological theories of memory posit that the neocortex is a storage site of declarative memories, a hallmark of which is the association of two arbitrary neutral stimuli. Early sensory cortices, once assumed uninvolved in memory storage, recently have been implicated in associations between neutral stimuli and reward or punishment. We asked whether links between neutral stimuli also could be formed in early visual or auditory cortices. Rats were presented with a tone paired with a light using a sensory preconditioning paradigm that enabled later evaluation of successful association. Subjects that acquired this association developed enhanced sound evoked potentials in their primary and secondary visual cortices. Laminar recordings localized this potential to cortical Layers 5 and 6. A similar pattern of activation was elicited by microstimulation of primary auditory cortex in the same subjects, consistent with a cortico-cortical substrate of association. Thus, early sensory cortex has the capability to form neutral stimulus associations. This plasticity may constitute a declarative memory trace between sensory cortices. PMID:24275832

  4. Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study

    PubMed Central

    Parker Jones, ‘Ōiwi; Prejawa, Susan; Hope, Thomas M. H.; Oberhuber, Marion; Seghier, Mohamed L.; Leff, Alex P.; Green, David W.; Price, Cathy J.

    2014-01-01

    The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as Sylvian-parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom up or top down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but with preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus and only one of the eight patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing. PMID:24550807

  5. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    PubMed

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Relationship between brainstem, cortical and behavioral measures relevant to pitch salience in humans.

    PubMed

    Krishnan, Ananthanarayan; Bidelman, Gavin M; Smalt, Christopher J; Ananthakrishnan, Saradha; Gandour, Jackson T

    2012-10-01

    Neural representation of pitch-relevant information at both the brainstem and cortical levels of processing is influenced by language or music experience. However, the functional roles of brainstem and cortical neural mechanisms in the hierarchical network for language processing, and how they drive and maintain experience-dependent reorganization, are not known. In an effort to evaluate the possible interplay between these two levels of pitch processing, we introduce a novel electrophysiological approach to evaluate pitch-relevant neural activity at the brainstem and auditory cortex concurrently. Brainstem frequency-following responses and cortical pitch responses were recorded from participants in response to iterated rippled noise stimuli that varied in stimulus periodicity (pitch salience). A control condition using iterated rippled noise devoid of pitch was employed to ensure pitch specificity of the cortical pitch response. Neural data were compared with behavioral pitch discrimination thresholds. Results showed that magnitudes of neural responses increase systematically and that behavioral pitch discrimination improves with increasing stimulus periodicity, indicating more robust encoding of salient pitch. Absence of the cortical pitch response in the control condition confirms that the cortical pitch response is specific to pitch. Behavioral pitch discrimination was better predicted by brainstem and cortical responses together than by each separately. The close correspondence between neural and behavioral data suggests that neural correlates of pitch salience that emerge in early, preattentive stages of processing in the brainstem may drive and maintain with high fidelity the early cortical representations of pitch. These neural representations together contain adequate information for the development of perceptual pitch salience. Copyright © 2012 Elsevier Ltd. All rights reserved.
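
    Iterated rippled noise of the kind used above is conventionally generated by repeatedly delaying a noise token and adding it back to itself; the pitch corresponds to the reciprocal of the delay and pitch salience grows with the number of iterations. The sketch below is a generic illustration of that delay-and-add procedure in Python/NumPy; the sampling rate, delay, gain, and iteration counts are arbitrary placeholders rather than the study's stimulus parameters:

        import numpy as np

        def iterated_rippled_noise(fs=16000, dur=0.5, delay_ms=8.0, gain=1.0,
                                   n_iter=8, seed=0):
            """Generate iterated rippled noise (IRN) by repeated delay-and-add.

            The pitch corresponds to 1/delay (8 ms -> 125 Hz); pitch salience
            grows with n_iter.
            """
            rng = np.random.default_rng(seed)
            n = int(fs * dur)
            d = int(round(fs * delay_ms / 1e3))          # delay in samples
            x = rng.standard_normal(n)
            for _ in range(n_iter):
                delayed = np.concatenate([np.zeros(d), x[:-d]])  # shift by d samples
                x = x + gain * delayed                   # add the delayed copy back in
            return x / np.max(np.abs(x))                 # normalize to +/- 1

        # irn_salient = iterated_rippled_noise(n_iter=32)  # strong pitch percept
        # irn_weak    = iterated_rippled_noise(n_iter=1)   # weak pitch percept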

  7. Plasticity in the Developing Auditory Cortex: Evidence from Children with Sensorineural Hearing Loss and Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Cardon, Garrett; Campbell, Julia; Sharma, Anu

    2013-01-01

    The developing auditory cortex is highly plastic. As such, the cortex is both primed to mature normally and at risk for re-organizing abnormally, depending upon numerous factors that determine central maturation. From a clinical perspective, at least two major components of development can be manipulated: 1) input to the cortex and 2) the timing of cortical input. Children with sensorineural hearing loss (SNHL) and auditory neuropathy spectrum disorder (ANSD) have provided a model of early deprivation of sensory input to the cortex, and demonstrated the resulting plasticity and development that can occur upon introduction of stimulation. In this article, we review several fundamental principles of cortical development and plasticity and discuss the clinical applications in children with SNHL and ANSD who receive intervention with hearing aids and/or cochlear implants. PMID:22668761

  8. Divergent Human Cortical Regions for Processing Distinct Acoustic-Semantic Categories of Natural Sounds: Animal Action Sounds vs. Vocalizations

    PubMed Central

    Webster, Paula J.; Skipper-Kallal, Laura M.; Frum, Chris A.; Still, Hayley N.; Ward, B. Douglas; Lewis, James W.

    2017-01-01

    A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds. PMID:28111538

  9. Speech processing: from peripheral to hemispheric asymmetry of the auditory system.

    PubMed

    Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier

    2012-01-01

    Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to offer nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then presents behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise either from a segregation of the speech signal within nonprimary auditory areas into two distinct temporal integration windows (a fast one on the left and a slower one on the right), as modeled by the asymmetric sampling in time theory, or from a spectro-temporal trade-off, with higher temporal resolution in the left hemisphere and higher spectral resolution in the right hemisphere, as modeled by the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for the acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration with direct clinical implications is given through the case of auditory aging, which mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  10. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.

    2015-01-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, slower, and less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low-frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676

  11. Interhemispheric transfer time in patients with auditory hallucinations: an auditory event-related potential study.

    PubMed

    Henshall, Katherine R; Sergejew, Alex A; McKay, Colette M; Rance, Gary; Shea, Tracey L; Hayden, Melissa J; Innes-Brown, Hamish; Copolov, David L

    2012-05-01

    Central auditory processing in schizophrenia patients with a history of auditory hallucinations has been reported to be impaired, and abnormalities of interhemispheric transfer have been implicated in these patients. This study examined interhemispheric functional connectivity between auditory cortical regions, using temporal information obtained from latency measures of the auditory N1 evoked potential. Interhemispheric Transfer Times (IHTTs) were compared across 3 subject groups: schizophrenia patients who had experienced auditory hallucinations (AH), schizophrenia patients without a history of auditory hallucinations (nonAH), and normal controls. Pure tones and single-syllable words were presented monaurally to each ear, while EEG was recorded continuously. IHTT was calculated for each stimulus type by comparing the latencies of the auditory N1 evoked potential recorded contralaterally and ipsilaterally to the ear of stimulation. The IHTTs for pure tones did not differ between groups. For word stimuli, the IHTT differed significantly across the 3 groups: it was close to zero in normal controls, largest in the AH group, and negative (shorter latencies ipsilaterally) in the nonAH group. Differences in IHTTs may be attributed to transcallosal dysfunction in the AH group, but altered or reversed cerebral lateralization in nonAH participants is also possible. Copyright © 2012 Elsevier B.V. All rights reserved.
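
    As a minimal sketch of the latency-based measure described above, the code below estimates N1 trough latencies from averaged ERPs and defines the IHTT as the ipsilateral minus the contralateral latency, so a negative value corresponds to the "shorter latencies ipsilaterally" noted for the nonAH group. The search window, sampling parameters, and function names are assumptions for illustration (Python/NumPy):

        import numpy as np

        def n1_latency(erp, fs, t_min=-0.1, window=(0.08, 0.15)):
            """Latency (s) of the N1 trough within a search window of an averaged ERP."""
            t = t_min + np.arange(erp.size) / fs
            mask = (t >= window[0]) & (t <= window[1])
            return t[mask][np.argmin(erp[mask])]         # N1 is a negative deflection

        def ihtt(erp_contra, erp_ipsi, fs):
            """Interhemispheric transfer time: ipsilateral minus contralateral N1 latency.

            Positive values indicate the conventional callosal delay of the
            ipsilateral response; negative values indicate shorter ipsilateral latencies.
            """
            return n1_latency(erp_ipsi, fs) - n1_latency(erp_contra, fs)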

  12. On the definition and interpretation of voice selective activation in the temporal cortex

    PubMed Central

    Bethmann, Anja; Brechmann, André

    2014-01-01

    Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas. Yet, evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed to assess the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories, namely sounds of animals and musical instruments. The argumentation was that only brain regions with statistically proven absence of activation by the control stimuli may be considered candidates for voice-selective areas. Neural activity was found to be stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG. There, only voices, but not the control stimuli, elicited an increase of the BOLD response above the resting baseline level. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question of how to define selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing, which posit a processing stream from the primary auditory cortices to anterior portions of the temporal lobes. PMID:25071527

  13. Cellular generators of the cortical auditory evoked potential initial component.

    PubMed

    Steinschneider, M; Tenke, C E; Schroeder, C E; Javitt, D C; Simpson, G V; Arezzo, J C; Vaughan, H G

    1992-01-01

    Cellular generators of the initial cortical auditory evoked potential (AEP) component were determined by analyzing laminar profiles of click-evoked AEPs, current source density, and multiple unit activity (MUA) in primary auditory cortex of awake monkeys. The initial AEP component is a surface-negative wave, N8, that peaks at 8-9 msec and inverts in polarity below lamina 4. N8 is generated by a lamina 4 current sink and a deeper current source. Simultaneous MUA is present from lower lamina 3 to the subjacent white matter. Findings indicate that thalamocortical afferents are a generator of N8 and support a role for lamina 4 stellate cells. Relationships to the human AEP are discussed.

  14. Strain differences of the effect of enucleation and anophthalmia on the size and growth of sensory cortices in mice.

    PubMed

    Massé, Ian O; Guillemette, Sonia; Laramée, Marie-Eve; Bronchti, Gilles; Boire, Denis

    2014-11-07

    Anophthalmia is a condition in which the eye does not develop from the early embryonic period. Early blindness induces cross-modal plastic modifications in the brain, such as auditory and haptic activation of the visual cortex, and also leads to a greater solicitation of the somatosensory and auditory cortices. The visual cortex is activated by auditory stimuli in anophthalmic mice, and activity is known to alter the growth pattern of the cerebral cortex. The size of the primary visual, auditory and somatosensory cortices and of the corresponding specific sensory thalamic nuclei was measured in intact and enucleated C57Bl/6J mice and in ZRDCT anophthalmic mice (ZRDCT/An) to evaluate the contribution of cross-modal activity to the growth of the cerebral cortex. In addition, the size of these structures was compared in intact, enucleated and anophthalmic fourth-generation backcrossed hybrid C57Bl/6J×ZRDCT/An mice to parse out the effects of mouse strain and of the different visual deprivations. The visual cortex was smaller in the anophthalmic ZRDCT/An than in the intact and enucleated C57Bl/6J mice. The auditory cortex was also larger, and the somatosensory cortex smaller, in the ZRDCT/An than in the intact and enucleated C57Bl/6J mice. The size differences of sensory cortices between the enucleated and anophthalmic mice were no longer present in the hybrid mice, showing specific genetic differences between C57Bl/6J and ZRDCT mice. The postnatal size increase of the visual cortex was smaller in the enucleated than in the anophthalmic and intact hybrid mice. This suggests differences in the activity of the visual cortex between enucleated and anophthalmic mice and indicates that early in-utero spontaneous neural activity in the visual system contributes to the shaping of functional properties of cortical networks. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Topographic Distribution of Stimulus-Specific Adaptation across Auditory Cortical Fields in the Anesthetized Rat

    PubMed Central

    Nieto-Diego, Javier; Malmierca, Manuel S.

    2016-01-01

    Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects of this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
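
    SSA in oddball recordings of this kind is commonly quantified with a contrast index comparing responses to the same tone when it appears as deviant versus standard; the sketch below implements one widely used form of that index (Python). The example spike counts are invented, and the exact index definition used in the study may differ:

        def ssa_index(d, s):
            """Frequency-specific SSA index: (deviant - standard) / (deviant + standard)."""
            return (d - s) / (d + s)

        def common_ssa_index(d_f1, s_f1, d_f2, s_f2):
            """Common SSA index (CSI) pooling the two frequencies of an oddball pair.

            d_f1, s_f1 : responses (e.g. spike counts) to frequency f1 as deviant / standard
            d_f2, s_f2 : responses to frequency f2 as deviant / standard
            """
            num = (d_f1 + d_f2) - (s_f1 + s_f2)
            den = (d_f1 + d_f2) + (s_f1 + s_f2)
            return num / den

        # Invented example: common_ssa_index(12.0, 5.0, 10.0, 4.0) -> approximately 0.42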

  16. Music training alters the course of adolescent auditory development.

    PubMed

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  17. Music training alters the course of adolescent auditory development

    PubMed Central

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  18. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations should hence be manifested at the level of larger neuronal assemblies or population patterns. In this study we investigated how information about frequency and sound level is integrated at the circuit level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that, with increasing stimulus intensities, progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site, inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
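
    Laminar current source density analysis of the sort combined with pharmacological silencing above is typically approximated as the negative second spatial derivative of the LFP along the probe depth. The sketch below shows that standard discrete estimate in Python/NumPy; the contact spacing and conductivity values are placeholders, not the study's recording parameters:

        import numpy as np

        def csd_second_derivative(lfp, spacing_um=100.0, sigma=0.3):
            """One-dimensional CSD estimate from a laminar LFP profile.

            lfp        : array (n_channels, n_samples), channels ordered by cortical depth
            spacing_um : inter-contact spacing of the laminar probe (micrometres, assumed)
            sigma      : assumed extracellular conductivity (S/m)
            Returns the CSD for interior channels; the two edge channels are dropped.
            """
            h = spacing_um * 1e-6                                  # spacing in metres
            d2 = lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]     # discrete 2nd spatial derivative
            return -sigma * d2 / h**2                              # CSD = -sigma * d2(phi)/dz2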

  19. Involvement of the human midbrain and thalamus in auditory deviance detection.

    PubMed

    Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César

    2015-02-01

    Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system that may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Assessment of cortical auditory evoked potentials in children with specific language impairment.

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Pilka, Adam; Skarżyński, Henryk

    2018-02-28

    The proper course of speech development heavily influences the cognitive and personal development of children. It is a precondition for preschool and school success, as it facilitates socializing and expressing feelings and needs. Impairment of language and its development in children represents a major diagnostic and therapeutic challenge for physicians and therapists. Early diagnosis of coexisting deficits and early initiation of therapy influence therapeutic success. One of the basic diagnostic tests for children with specific language impairment (SLI) is audiometry, traditionally regarded simply as a test of hearing threshold. However, auditory processing is just as important as a normal hearing threshold. Diagnosis of central auditory processing disorder may therefore be a valuable supplement to the diagnosis of language impairment. Early diagnosis and implementation of appropriate treatment may contribute to effective language therapy.

  1. The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex.

    PubMed

    Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J

    2014-09-01

    Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. © 2014 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  2. 40 Hz Auditory Steady-State Response Is a Pharmacodynamic Biomarker for Cortical NMDA Receptors.

    PubMed

    Sivarao, Digavalli V; Chen, Ping; Senapati, Arun; Yang, Yili; Fernandes, Alda; Benitex, Yulia; Whiterock, Valerie; Li, Yu-Wen; Ahlijanian, Michael K

    2016-08-01

    Schizophrenia patients exhibit dysfunctional gamma oscillations in response to simple auditory stimuli or more complex cognitive tasks, a phenomenon explained by reduced NMDA transmission within inhibitory/excitatory cortical networks. Indeed, in a simple steady-state auditory click stimulation paradigm at gamma frequency (~40 Hz), patients have reproducibly shown reduced entrainment as measured by electroencephalography (EEG). However, some investigators have reported increased phase locking factor (PLF) and power in response to the 40 Hz auditory stimulus in patients. Interestingly, the preclinical literature also reflects this contradiction. We investigated whether a graded deficiency in NMDA transmission can account for such disparate findings by administering subanesthetic ketamine (1-30 mg/kg, i.v.) or vehicle to conscious rats (n=12) and testing their EEG entrainment to 40 Hz click stimuli at various time points (~7-62 min after treatment). In separate cohorts, we examined in vivo NMDA channel occupancy and tissue exposure to contextualize the ketamine effects. We report a robust inverse relationship between PLF and NMDA occupancy 7 min after dosing. Moreover, ketamine could produce inhibition or disinhibition of the 40 Hz response in a temporally dynamic manner. These results provide for the first time empirical data to understand how a cortical NMDA transmission deficit may lead to opposite modulations of the auditory steady-state response (ASSR). Importantly, our findings posit that the 40 Hz ASSR is a pharmacodynamic biomarker for cortical NMDA function that is also robustly translatable. Besides schizophrenia, such a functional biomarker may be of value in neuropsychiatric disorders such as bipolar disorder and autism spectrum disorder, where 40 Hz ASSR deficits have been documented.
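
    The phase locking factor referred to above is conventionally defined as the length of the resultant vector of single-trial phases at the stimulation frequency. The sketch below computes it from epoched EEG using the FFT bin nearest 40 Hz (Python/NumPy); the data layout and the single-channel, single-bin simplification are assumptions rather than the authors' exact analysis:

        import numpy as np

        def phase_locking_factor(epochs, fs, freq=40.0):
            """Phase-locking factor at `freq` across trials.

            epochs : array (n_trials, n_samples), single-channel EEG time-locked to the clicks
            fs     : sampling rate in Hz
            Returns a value in [0, 1]; 1 means identical phase at `freq` on every trial.
            """
            n_samples = epochs.shape[1]
            spectra = np.fft.rfft(epochs, axis=1)                # per-trial spectra
            freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
            k = np.argmin(np.abs(freqs - freq))                  # FFT bin nearest 40 Hz
            phases = np.angle(spectra[:, k])
            return np.abs(np.mean(np.exp(1j * phases)))          # resultant vector length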

  3. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  4. Modality-specificity of sensory aging in vision and audition: evidence from event-related potentials.

    PubMed

    Ceponiene, R; Westerfield, M; Torki, M; Townsend, J

    2008-06-18

    Major accounts of aging implicate changes in the processing of external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were an age-related enhancement in the late P2 range and a positive correlation between behavior and the early N2, the latter suggesting that the N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neurobiological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of the auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.

  5. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of the posterior superior temporal sulcus in integrative processing. We interpret these findings in a framework of temporally focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, while a late integration stage incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately, it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  6. Do not throw out the baby with the bath water: choosing an effective baseline for a functional localizer of speech processing.

    PubMed

    Stoppelman, Nadav; Harpaz, Tamar; Ben-Shachar, Michal

    2013-05-01

    Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used in order to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech response in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in the left inferior frontal gyrus decays significantly faster than the response to speech, suggesting that this response reflects bottom-up activation of speech analysis followed by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.

  7. Analyzing pitch chroma and pitch height in the human brain.

    PubMed

    Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D

    2003-11-01

    The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.

  8. A Neural Code That Is Isometric to Vocal Output and Correlates with Its Sensory Consequences

    PubMed Central

    Vyssotski, Alexei L.; Stepien, Anna E.; Keller, Georg B.; Hahnloser, Richard H. R.

    2016-01-01

    What cortical inputs are provided to motor control areas while they drive complex learned behaviors? We study this question in the nucleus interface of the nidopallium (NIf), which is required for normal birdsong production and provides the main source of auditory input to HVC, the driver of adult song. In juvenile and adult zebra finches, we find that spikes in NIf projection neurons precede vocalizations by several tens of milliseconds and are insensitive to distortions of auditory feedback. We identify a local isometry between NIf output and vocalizations: quasi-identical notes produced in different syllables are preceded by highly similar NIf spike patterns. NIf multiunit firing during song precedes responses in auditory cortical neurons by about 50 ms, revealing delayed congruence between NIf spiking and a neural representation of auditory feedback. Our findings suggest that NIf codes for imminent acoustic events within vocal performance. PMID:27723764

  9. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    PubMed

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-timing-dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker-independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
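
    The STDP learning rule mentioned above is commonly modeled with exponentially decaying potentiation and depression windows. The sketch below shows that textbook pair-based form in Python; the amplitudes and time constants are illustrative values and are not taken from the model described in the paper:

        import numpy as np

        def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
            """Weight change for a single pre/post spike pair under exponential STDP.

            dt = t_post - t_pre in seconds. Pre-before-post (dt > 0) potentiates,
            post-before-pre (dt < 0) depresses. Parameter values are illustrative only.
            """
            if dt >= 0:
                return a_plus * np.exp(-dt / tau_plus)
            return -a_minus * np.exp(dt / tau_minus)

        # stdp_dw(0.005)  -> small positive change (potentiation)
        # stdp_dw(-0.005) -> small negative change (depression)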

  10. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain

    PubMed Central

    Stringer, Simon

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-timing-dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker-independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. PMID:28797034

  11. Cortical systems associated with covert music rehearsal.

    PubMed

    Langheim, Frederick J P; Callicott, Joseph H; Mattay, Venkata S; Duyn, Jeff H; Weinberger, Daniel R

    2002-08-01

    Musical representation and overt music production are necessarily complex cognitive phenomena. While overt musical performance may be observed and studied, the act of performance itself necessarily skews results toward the importance of primary sensorimotor and auditory cortices. However, imagined musical performance (IMP) represents a complex behavioral task involving components suited to exploring the physiological underpinnings of musical cognition in music performance without the sensorimotor and auditory confounds of overt performance. We mapped the blood oxygenation level-dependent fMRI activation response associated with IMP in experienced musicians independent of the piece imagined. IMP consistently activated supplementary motor and premotor areas, right superior parietal lobule, right inferior frontal gyrus, bilateral mid-frontal gyri, and bilateral lateral cerebellum in contrast with rest, in a manner distinct from fingertapping versus rest and passive listening to the same piece versus rest. These data implicate an associative network independent of primary sensorimotor and auditory activity, likely representing the cortical elements most intimately linked to music production.

  12. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat.

    PubMed

    Razak, Khaleel A; Fuzessery, Zoltan M

    2015-10-01

    Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independent of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience independent manner. (3) Experience utilizes millisecond range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. © 2014 Wiley Periodicals, Inc.

  13. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat

    PubMed Central

    Razak, Khaleel A.; Fuzessery, Zoltan M.

    2014-01-01

    Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations, or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independent of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience independent manner. (3) Experience utilizes millisecond range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. PMID:25142131

  14. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to interactions between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech, which would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Hemisphere Differences in Speech-Sound Event-Related Potentials in Intensive Care Neonates: Associations and Predictive Value for Development in Infancy

    PubMed Central

    Maitre, Nathalie L.; Slaughter, James C.; Aschner, Judy L.; Key, Alexandra P.

    2014-01-01

    Neurodevelopmental delays in intensive care neonates are common but difficult to predict. In children, hemisphere differences in cortical processing of speech are predictive of cognitive performance. We hypothesized that hemisphere differences in auditory event-related potentials in intensive care neonates are predictive of neurodevelopment in infancy, even in those born preterm. Event-related potentials to speech sounds were prospectively recorded in 57 infants (gestational age 24–40 weeks) prior to discharge. The Developmental Assessment of Young Children was performed at 6 and 12 months. Hemisphere differences in mean amplitudes increased with postnatal age (P < .01) but not with gestational age. Greater hemisphere differences were associated with improved communication and cognitive scores at 6 and 12 months, but decreased in significance at 12 months after adjusting for socioeconomic and clinical factors. Auditory cortical responses can be used in intensive care neonates to help identify infants at higher risk for delays in infancy. PMID:23864588

  16. Cortical encoding and neurophysiological tracking of intensity and pitch cues signaling English stress patterns in native and nonnative speakers.

    PubMed

    Chung, Wei-Lun; Bidelman, Gavin M

    2016-01-01

    We examined cross-language differences in neural encoding and tracking of intensity and pitch cues signaling English stress patterns. Auditory mismatch negativities (MMNs) were recorded in English and Mandarin listeners in response to contrastive English pseudowords whose primary stress occurred either on the first or second syllable (i.e., "nocTICity" vs. "NOCticity"). The contrastive syllable stress elicited two consecutive MMNs in both language groups, but English speakers demonstrated larger responses to stress patterns than Mandarin speakers. Correlations between the amplitude of ERPs and continuous changes in the running intensity and pitch of speech assessed how well each language group's brain activity tracked these salient acoustic features of lexical stress. We found that English speakers' neural responses tracked intensity changes in speech more closely than Mandarin speakers (higher brain-acoustic correlation). Findings demonstrate more robust and precise processing of English stress (intensity) patterns in early auditory cortical responses of native relative to nonnative speakers. Copyright © 2016 Elsevier Inc. All rights reserved.
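
    The brain-acoustic correlation described above amounts to correlating the neural response waveform with a running acoustic feature of the stimulus. The sketch below computes a running RMS intensity and a Pearson correlation against an averaged ERP assumed to be resampled onto the same time grid; the function names, 20-ms window, and use of a plain Pearson correlation are illustrative assumptions rather than the authors' exact method (Python/NumPy):

        import numpy as np

        def running_intensity(signal, fs, win_s=0.02):
            """Running RMS intensity of a speech waveform in consecutive 20-ms windows."""
            n = int(fs * win_s)
            n_win = len(signal) // n
            frames = signal[: n_win * n].reshape(n_win, n)
            return np.sqrt(np.mean(frames**2, axis=1))

        def brain_acoustic_correlation(erp, envelope):
            """Pearson correlation between an averaged ERP and a running intensity envelope.

            Both inputs are assumed to be resampled onto the same time grid
            over the word epoch and to have the same length.
            """
            erp_z = (erp - erp.mean()) / erp.std()
            env_z = (envelope - envelope.mean()) / envelope.std()
            return float(np.mean(erp_z * env_z))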

  17. Propofol disrupts functional interactions between sensory and high-order processing of auditory verbal memory.

    PubMed

    Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G

    2012-10-01

    Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process by which low-order neural representations of sensory stimuli become integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For the connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information and integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.
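
    Seed-based functional connectivity of the type used above is commonly computed as the Pearson correlation between a seed region's mean BOLD time course and every other voxel's time course. The sketch below shows that generic computation in Python/NumPy; it is not the authors' fMRI-guided pipeline, and the array shapes and seed names are assumptions:

        import numpy as np

        def seed_connectivity(seed_ts, voxel_ts):
            """Seed-based functional connectivity map (one Pearson r per voxel).

            seed_ts  : array (n_timepoints,), mean BOLD time course of the seed (e.g. PAC or IFG)
            voxel_ts : array (n_voxels, n_timepoints), BOLD time courses of all brain voxels
            """
            seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
            vox = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
            vox = vox / vox.std(axis=1, keepdims=True)
            return vox @ seed / len(seed)                # correlation value per voxel

        # A Fisher z-transform (np.arctanh) is often applied before group-level statistics.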

  18. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.

    PubMed

    Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk

    2007-10-01

    In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

  19. Storing maternal memories: Hypothesizing an interaction of experience and estrogen on sensory cortical plasticity to learn infant cues

    PubMed Central

    Banerjee, Sunayana B.; Liu, Robert C.

    2013-01-01

    Much of the literature on maternal behavior has focused on the role of infant experience and hormones in a canonical subcortical circuit for maternal motivation and maternal memory. Although early studies demonstrated that the cerebral cortex also plays a significant role in maternal behaviors, little has been done to explore what that role may be. Recent work though has provided evidence that the cortex, particularly sensory cortices, contains correlates of sensory memories of infant cues, consistent with classical studies of experience-dependent sensory cortical plasticity in non-maternal paradigms. By reviewing the literature from both the maternal behavior and sensory cortical plasticity fields, focusing on the auditory modality, we hypothesize that maternal hormones (predominantly estrogen) may act to prime auditory cortical neurons for a longer-lasting neural trace of infant vocal cues, thereby facilitating recognition and discrimination. This could then more efficiently activate the subcortical circuit to elicit and sustain maternal behavior. PMID:23916405

  20. Dynamics of hemispheric dominance for language assessed by magnetoencephalographic imaging.

    PubMed

    Findlay, Anne M; Ambrose, Josiah B; Cahn-Weiner, Deborah A; Houde, John F; Honma, Susanne; Hinkley, Leighton B N; Berger, Mitchel S; Nagarajan, Srikantan S; Kirsch, Heidi E

    2012-05-01

    The goal of the current study was to examine the dynamics of language lateralization using magnetoencephalographic (MEG) imaging, to determine the sensitivity and specificity of MEG imaging, and to determine whether MEG imaging can become a viable alternative to the intracarotid amobarbital procedure (IAP), the current gold standard for preoperative language lateralization in neurosurgical candidates. MEG was recorded during an auditory verb generation task and imaging analysis of oscillatory activity was initially performed in 21 subjects with epilepsy, brain tumor, or arteriovenous malformation who had undergone IAP and MEG. Time windows and brain regions of interest that best discriminated between IAP-determined left or right dominance for language were identified. Parameters derived in the retrospective analysis were applied to a prospective cohort of 14 patients and healthy controls. Power decreases in the beta frequency band were consistently observed following auditory stimulation in inferior frontal, superior temporal, and parietal cortices; similar power decreases were also seen in inferior frontal cortex prior to and during overt verb generation. Language lateralization was clearly observed to be a dynamic process that is bilateral for several hundred milliseconds during periods of auditory perception and overt speech production. Correlation with the IAP was seen in 13 of 14 (93%) prospective patients, with the test demonstrating a sensitivity of 100% and specificity of 92%. Our results demonstrate excellent correlation between MEG imaging findings and the IAP for language lateralization, and provide new insights into the spatiotemporal dynamics of cortical speech processing. Copyright © 2012 American Neurological Association.
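
    Hemispheric dominance in analyses like this is often summarized with a laterality index that contrasts task-related power changes in homologous left- and right-hemisphere regions of interest. The sketch below shows that conventional index in Python; the input values and the mapping onto specific regions are hypothetical, and the study's actual classification rule may differ:

        def laterality_index(left_power_decrease, right_power_decrease):
            """Conventional laterality index from task-related beta power decreases.

            Inputs are summed magnitudes of beta-band power decreases in homologous
            left- and right-hemisphere regions of interest; +1 means fully
            left-lateralized, -1 fully right-lateralized.
            """
            l, r = abs(left_power_decrease), abs(right_power_decrease)
            return (l - r) / (l + r)

        # laterality_index(8.5, 3.1) -> approximately 0.47, i.e. left-hemisphere dominance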

  1. Sound envelope encoding in the auditory cortex revealed by neuromagnetic responses in the theta to gamma frequency bands.

    PubMed

    Miyazaki, Takahiro; Thompson, Jessica; Fujioka, Takako; Ross, Bernhard

    2013-04-19

    Amplitude fluctuations of natural sounds carry multiple types of information represented at different time scales, such as syllables and voice pitch in speech. However, it is not well understood how such amplitude fluctuations at different time scales are processed in the brain. In the present study we investigated the effect of the stimulus rate on the cortical evoked responses using magnetoencephalography (MEG). We used a two-tone complex sound, whose envelope fluctuated at the difference frequency and induced an acoustic beat sensation. When the beat rate was continuously swept between 3 Hz and 60 Hz, the auditory evoked response showed distinct transient waves at slow rates, while at fast rates continuous sinusoidal oscillations similar to the auditory steady-state response (ASSR) were observed. We further derived temporal modulation transfer functions (TMTF) from amplitudes of the transient responses and from the ASSR. The results identified two critical rates of 12.5 Hz and 25 Hz, at which consecutive transient responses overlapped with each other. These stimulus rates roughly corresponded to the rates at which the perceptual quality of the sound envelope is known to change. Low rates (<10 Hz) are perceived as loudness fluctuation, medium rates as acoustical flutter, and rates above 25 Hz as roughness. We conclude that these results reflect cortical processes that integrate successive acoustic events at different time scales for extracting complex features of natural sound. Copyright © 2013 Elsevier B.V. All rights reserved.
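
    The stimulus construction described above can be sketched in a few lines: a two-tone complex whose amplitude envelope beats at the difference frequency of its components, with the envelope recovered via the Hilbert transform. The sampling rate, carrier frequency, and beat rate below are arbitrary choices for illustration, not the study's parameters.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 16000            # sampling rate (Hz), arbitrary
    dur = 2.0             # duration (s)
    f_carrier = 500.0     # lower tone frequency (Hz), arbitrary
    beat_rate = 40.0      # difference frequency = envelope beat rate (Hz)

    t = np.arange(int(fs * dur)) / fs
    two_tone = (np.sin(2 * np.pi * f_carrier * t)
                + np.sin(2 * np.pi * (f_carrier + beat_rate) * t))

    # Extract the envelope and confirm that it fluctuates at the beat rate
    # (the peak of the envelope spectrum).
    envelope = np.abs(hilbert(two_tone))
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
    print("envelope peak near %.1f Hz" % freqs[np.argmax(spectrum)])
    ```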

  2. Temporal characteristics of audiovisual information processing.

    PubMed

    Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T

    2008-05-14

    In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency at which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
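
    The voxel-wise latency analysis can be illustrated with a minimal sketch, assuming a simple histogram estimator of mutual information (the study's exact estimator and preprocessing are not reproduced): for one voxel, mutual information between the stimulus label and the BOLD amplitude is computed at a series of post-stimulus lags, and the lag with the highest information content is retained.

    ```python
    import numpy as np

    def mutual_information(x, labels, bins=8):
        """Histogram-based MI (bits) between a continuous signal x and discrete labels."""
        x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
        joint, _, _ = np.histogram2d(x_binned, labels,
                                     bins=(bins + 2, len(np.unique(labels))))
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def best_latency(bold, labels, lags):
        """bold: (n_trials, n_lags) amplitudes of one voxel sampled at each lag;
        labels: (n_trials,) stimulus condition per trial (e.g. A, V, AV).
        Returns the lag whose amplitude is most informative about the stimulus."""
        mi = [mutual_information(bold[:, i], labels) for i in range(len(lags))]
        return lags[int(np.argmax(mi))], mi

    # Toy usage with simulated data: the 6 s lag is made informative.
    rng = np.random.default_rng(0)
    lags = np.arange(0, 12, 2)             # seconds post-stimulus, arbitrary grid
    labels = rng.integers(0, 3, size=200)  # three stimulus classes
    bold = rng.normal(size=(200, len(lags)))
    bold[:, 3] += labels
    print(best_latency(bold, labels, lags)[0])
    ```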

  3. Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.

    PubMed

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent

    2010-07-01

    Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  4. Aphasia and Auditory Processing after Stroke through an International Classification of Functioning, Disability and Health Lens

    PubMed Central

    Purdy, Suzanne C.; Wanigasekara, Iruni; Cañete, Oscar M.; Moore, Celia; McCann, Clare M.

    2016-01-01

    Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home such as work and recreation and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function post-stroke. PMID:27489401

  5. The time course of auditory-visual processing of speech and body actions: evidence for the simultaneous activation of an extended neural network for semantic processing.

    PubMed

    Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M

    2013-08-01

    An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole body actions. Here we present results from a high-density ERP study designed to examine the time-course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: (1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. (2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory–visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum participant age below 18 years, nonhuman participants, and articles not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After the abstracts were read and consensus was reached between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, together with attentional modulation and top-down feedback, serves as the fundamental framework in the current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as a neurophysiological substrate for auditory prediction. Tinnitus has been modeled as an auditory object which may demonstrate incomplete processing during auditory scene analysis, resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.

  7. An anatomical and functional topography of human auditory cortical areas

    PubMed Central

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426

  8. One Year of Musical Training Affects Development of Auditory Cortical-Evoked Fields in Young Children

    ERIC Educational Resources Information Center

    Fujioka, Takako; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo; Trainor, Laurel J.

    2006-01-01

    Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields…

  9. Deconvolution of magnetic acoustic change complex (mACC).

    PubMed

    Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W

    2014-11-01

    The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train, (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transition of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid transition of sound. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
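
    The least squares deconvolution idea can be sketched in its generic linear form: a design matrix of shifted event onsets (one block of columns per response type) is regressed against the continuous recording, and the fitted coefficients recover the overlapping evoked waveforms. The sketch below uses illustrative parameters and is not the authors' implementation.

    ```python
    import numpy as np

    def ls_deconvolve(recording, onsets_per_event, kernel_len):
        """Recover overlapping evoked responses by linear least squares.
        recording: (n_samples,) continuous MEG/EEG trace.
        onsets_per_event: dict mapping event name -> array of onset samples.
        kernel_len: length (in samples) of each estimated response."""
        n = recording.size
        names = list(onsets_per_event)
        X = np.zeros((n, kernel_len * len(names)))
        for e, name in enumerate(names):
            for onset in onsets_per_event[name]:
                for k in range(kernel_len):
                    if onset + k < n:
                        X[onset + k, e * kernel_len + k] += 1.0
        beta, *_ = np.linalg.lstsq(X, recording, rcond=None)
        return {name: beta[e * kernel_len:(e + 1) * kernel_len]
                for e, name in enumerate(names)}

    # Toy usage: two overlapping response types presented with short, jittered SOAs.
    rng = np.random.default_rng(1)
    kernel_len = 100
    true = {"onset": np.hanning(kernel_len), "F0_up": -0.5 * np.hanning(kernel_len)}
    onsets = {"onset": np.arange(50, 5000, 180) + rng.integers(-10, 10, 28),
              "F0_up": np.arange(140, 5000, 180) + rng.integers(-10, 10, 27)}
    trace = np.zeros(5200)
    for name, ons in onsets.items():
        for o in ons:
            trace[o:o + kernel_len] += true[name]
    trace += 0.05 * rng.normal(size=trace.size)
    estimated = ls_deconvolve(trace, onsets, kernel_len)
    ```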

  10. New Perspectives on Assessing Amplification Effects

    PubMed Central

    Souza, Pamela E.; Tremblay, Kelly L.

    2006-01-01

    Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734

  11. Thalamic input to auditory cortex is locally heterogeneous but globally tonotopic

    PubMed Central

    Vasquez-Lopez, Sebastian A; Weissenberger, Yves; Lohse, Michael; Keating, Peter; King, Andrew J

    2017-01-01

    Topographic representation of the receptor surface is a fundamental feature of sensory cortical organization. This is imparted by the thalamus, which relays information from the periphery to the cortex. To better understand the rules governing thalamocortical connectivity and the origin of cortical maps, we used in vivo two-photon calcium imaging to characterize the properties of thalamic axons innervating different layers of mouse auditory cortex. Although tonotopically organized at a global level, we found that the frequency selectivity of individual thalamocortical axons is surprisingly heterogeneous, even in layers 3b/4 of the primary cortical areas, where the thalamic input is dominated by the lemniscal projection. We also show that thalamocortical input to layer 1 includes collaterals from axons innervating layers 3b/4 and is largely in register with the main input targeting those layers. Such locally varied thalamocortical projections may be useful in enabling rapid contextual modulation of cortical frequency representations. PMID:28891466
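
    As a rough, hypothetical illustration of how "locally heterogeneous but globally tonotopic" might be quantified, the sketch below takes the best frequency of each imaged axon as the peak of its tuning curve and summarizes local heterogeneity as the spread of best frequencies among neighbouring axons. The tone set, field size, and neighbourhood radius are assumptions, not the study's parameters.

    ```python
    import numpy as np

    def best_frequencies(tuning, freqs):
        """tuning: (n_axons, n_freqs) mean response per pure-tone frequency.
        Returns each axon's best frequency in octaves re 1 kHz."""
        return np.log2(freqs[np.argmax(tuning, axis=1)] / 1000.0)

    def local_bf_spread(bf, xy, radius_um=100.0):
        """Median absolute best-frequency difference (octaves) between each axon
        and its neighbours within radius_um; larger values indicate more locally
        heterogeneous thalamic input."""
        spreads = []
        for i in range(len(bf)):
            d = np.linalg.norm(xy - xy[i], axis=1)
            nbrs = (d > 0) & (d < radius_um)
            if nbrs.any():
                spreads.append(np.median(np.abs(bf[nbrs] - bf[i])))
        return float(np.median(spreads))

    # Toy usage: a smooth tonotopic gradient plus substantial local scatter.
    rng = np.random.default_rng(8)
    freqs = np.geomspace(2000, 32000, 16)                    # test tones (Hz)
    xy = rng.uniform(0, 400, size=(150, 2))                  # axon positions (um)
    true_bf = 1 + xy[:, 0] / 200 + rng.normal(0, 0.8, 150)   # octaves re 1 kHz
    tuning = np.exp(-0.5 * ((np.log2(freqs / 1000.0) - true_bf[:, None]) / 0.5) ** 2)
    print(local_bf_spread(best_frequencies(tuning, freqs), xy))
    ```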

  12. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  13. Plasticity of white matter connectivity in phonetics experts.

    PubMed

    Vandermosten, Maaike; Price, Cathy J; Golestani, Narly

    2016-09-01

    Phonetics experts are highly trained to analyze and transcribe speech, both with respect to faster changing, phonetic features, and to more slowly changing, prosodic features. Previously we reported that, compared to non-phoneticians, phoneticians had greater local brain volume in bilateral auditory cortices and the left pars opercularis of Broca's area, with training-related differences in the grey-matter volume of the left pars opercularis in the phoneticians group (Golestani et al. 2011). In the present study, we used diffusion MRI to examine white matter microstructure, indexed by fractional anisotropy, in (1) the long segment of arcuate fasciculus (AF_long), which is a well-known language tract that connects Broca's area, including left pars opercularis, to the temporal cortex, and in (2) the fibers arising from the auditory cortices. Most of these auditory fibers belong to three validated language tracts, namely to the AF_long, the posterior segment of the arcuate fasciculus and the middle longitudinal fasciculus. We found training-related differences in phoneticians in left AF_long, as well as group differences relative to non-experts in the auditory fibers (including the auditory fibers belonging to the left AF_long). Taken together, the results of both studies suggest that grey matter structural plasticity arising from phonetic transcription training in Broca's area is accompanied by changes to the white matter fibers connecting this very region to the temporal cortex. Our findings suggest expertise-related changes in white matter fibers connecting fronto-temporal functional hubs that are important for phonetic processing. Further studies can pursue this hypothesis by examining the dynamics of these expertise related grey and white matter changes as they arise during phonetic training.

  14. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.

    PubMed

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.

  16. Right-left asymmetry in the cortical processing of sounds for social communication vs. navigation in mustached bats.

    PubMed

    Kanwal, Jagmeet S

    2012-01-01

    In the Doppler-shifted constant frequency processing area in the primary auditory cortex of mustached bats, Pteronotus parnellii, neurons respond to both social calls and to echolocation signals. This multifunctional nature of cortical neurons creates a paradox for simultaneous processing of two behaviorally distinct categories of sound. To test the possibility of a stimulus-specific hemispheric bias, single-unit responses were obtained to both types of sounds, calls and pulse-echo tone pairs, from the right and left auditory cortex. Neurons on the left exhibited only slightly higher peak response magnitudes for their respective best calls, but they showed a significantly higher sensitivity (lower response thresholds) to calls than neurons on the right. On average, call-to-tone response ratios were significantly higher for neurons on the left than for those on the right. Neurons on the right responded significantly more strongly to pulse-echo tone pairs than those on the left. Overall, neurons in males responded to pulse-echo tone pairs with a much higher spike count compared to females, but this difference was less pronounced for calls. Multidimensional scaling of call responses yielded a segregated representation of call types only on the left. These data establish, for the first time, a behaviorally directed right-left asymmetry at the level of single cortical neurons. It is proposed that a lateralized cortex emerges from multiparametric integration (e.g. combination-sensitivity) within a neuron and inhibitory interactions between neurons that come into play during the processing of complex sounds. © 2011 The Author. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  17. Two-Photon Functional Imaging of the Auditory Cortex in Behaving Mice: From Neural Networks to Single Spines.

    PubMed

    Li, Ruijie; Wang, Meng; Yao, Jiwei; Liang, Shanshan; Liao, Xiang; Yang, Mengke; Zhang, Jianxiong; Yan, Junan; Jia, Hongbo; Chen, Xiaowei; Li, Xingyi

    2018-01-01

    In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals for acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the abundant temporal muscles, arteries around the ears and their lateral locations. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. By using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging in combination with targeted cell-attached recordings of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded the Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites and single spines, in auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored the neuronal dynamics over days throughout the process of associative learning. Therefore, we achieved two-photon functional imaging at multiple scales in auditory cortex of behaving mice, which extends the toolbox for investigating the neural basis of audition-related behaviors.
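
    Ca2+ transients from such recordings are commonly expressed as ΔF/F against a slowly varying baseline. The sketch below shows one standard variant with a sliding low-percentile baseline; the window length and percentile are illustrative defaults, not settings taken from this protocol.

    ```python
    import numpy as np

    def delta_f_over_f(trace, fs, window_s=30.0, percentile=8):
        """dF/F with a sliding low-percentile baseline.
        trace: raw fluorescence of one ROI (population, soma, dendrite or spine).
        fs: imaging frame rate (Hz); window and percentile are illustrative."""
        half = int(window_s * fs / 2)
        baseline = np.array([
            np.percentile(trace[max(0, i - half):i + half + 1], percentile)
            for i in range(trace.size)
        ])
        return (trace - baseline) / baseline

    # Toy usage: a drifting baseline with a few calcium-like transients.
    fs = 10.0
    t = np.arange(0, 120, 1 / fs)
    raw = 100 + 5 * np.sin(2 * np.pi * t / 60)            # slow drift
    for onset in (200, 500, 900):                          # frames with transients
        raw[onset:onset + 50] += 30 * np.exp(-np.arange(50) / 10)
    dff = delta_f_over_f(raw, fs)
    ```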

  19. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  20. Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets

    PubMed Central

    Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.

    2017-01-01

    The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489

  1. Probing sensorimotor integration during musical performance.

    PubMed

    Furuya, Shinichi; Furukawa, Yuta; Uehara, Kazumasa; Oku, Takanori

    2018-03-10

    An integration of afferent sensory information from the visual, auditory, and proprioceptive systems into execution and update of motor programs plays crucial roles in control and acquisition of skillful sequential movements in musical performance. However, conventional behavioral and neurophysiological techniques, which have typically been applied to simple motor behaviors, limit the elucidation of online sensorimotor integration processes underlying skillful musical performance. Here, we propose two novel techniques that were developed to investigate the roles of auditory and proprioceptive feedback in piano performance. First, a closed-loop noninvasive brain stimulation system that consists of transcranial magnetic stimulation, a motion sensor, and a microcomputer made it possible to assess time-varying cortical processes subserving auditory-motor integration during piano playing. Second, a force-field system capable of manipulating the weight of a piano key allowed for characterizing movement adaptation based on the feedback obtained, which can shed light on the formation of an internal representation of the piano. Results of neurophysiological and psychophysics experiments provided evidence validating these systems as effective means for disentangling computational and neural processes of sensorimotor integration in musical performance. © 2018 New York Academy of Sciences.

  2. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
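
    The two behavioural measures can be summarized per hemifield with a short sketch, assuming a seven-speaker arc with 30-degree spacing and simulated responses; the data and error model below are hypothetical.

    ```python
    import numpy as np

    def localization_summary(target_deg, response_deg):
        """Accuracy (mean absolute error) and variability (mean within-target SD),
        computed separately for the left and right hemifields.
        Angles in degrees; negative = left, positive = right."""
        out = {}
        for name, mask in (("left", target_deg < 0), ("right", target_deg > 0)):
            err = np.abs(response_deg[mask] - target_deg[mask])
            sds = [response_deg[mask][target_deg[mask] == t].std()
                   for t in np.unique(target_deg[mask])]
            out[name] = {"mean_abs_error": float(err.mean()),
                         "response_sd": float(np.mean(sds))}
        return out

    # Toy usage: seven speakers at 30-degree spacing, simulated responses.
    rng = np.random.default_rng(2)
    speakers = np.array([-90, -60, -30, 0, 30, 60, 90])
    targets = rng.choice(speakers, size=140)
    responses = targets + rng.normal(0, 12, size=140)      # localization noise
    print(localization_summary(targets, responses))
    ```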

  3. Reduced connectivity of the auditory cortex in patients with auditory hallucinations: a resting state functional magnetic resonance imaging study.

    PubMed

    Gavrilescu, M; Rossell, S; Stuart, G W; Shea, T L; Innes-Brown, H; Henshall, K; McKay, C; Sergejew, A A; Copolov, D; Egan, G F

    2010-07-01

    Previous research has reported auditory processing deficits that are specific to schizophrenia patients with a history of auditory hallucinations (AH). One explanation for these findings is that there are abnormalities in the interhemispheric connectivity of auditory cortex pathways in AH patients; as yet this explanation has not been experimentally investigated. We assessed the interhemispheric connectivity of both primary (A1) and secondary (A2) auditory cortices in n=13 AH patients, n=13 schizophrenia patients without auditory hallucinations (non-AH) and n=16 healthy controls using functional connectivity measures from functional magnetic resonance imaging (fMRI) data. Functional connectivity was estimated from resting state fMRI data using regions of interest defined for each participant based on functional activation maps in response to passive listening to words. Additionally, stimulus-induced responses were regressed out of the stimulus data and the functional connectivity was estimated for the same regions to investigate the reliability of the estimates. AH patients had significantly reduced interhemispheric connectivity in both A1 and A2 when compared with non-AH patients and healthy controls. The latter two groups did not show any differences in functional connectivity. Further, this pattern of findings was similar across the two datasets, indicating the reliability of our estimates. These data have identified a trait deficit specific to AH patients. Since this deficit was characterized within both A1 and A2 it is expected to result in the disruption of multiple auditory functions, for example, the integration of basic auditory information between hemispheres (via A1) and higher-order language processing abilities (via A2).
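
    The core connectivity measure can be sketched as follows, assuming the ROI time series have already been extracted: the mean resting-state time series of homologous left and right regions are correlated and Fisher z-transformed for group comparison. The localizer-based ROI definition and the regression of stimulus-induced responses used in the study are not reproduced here.

    ```python
    import numpy as np

    def interhemispheric_connectivity(ts_left, ts_right):
        """Pearson correlation between the mean time series of homologous
        left/right ROIs (e.g. A1 or A2), Fisher z-transformed for group statistics.
        ts_left, ts_right: (n_voxels, n_timepoints) resting-state data."""
        left = ts_left.mean(axis=0)
        right = ts_right.mean(axis=0)
        r = np.corrcoef(left, right)[0, 1]
        return np.arctanh(r)          # Fisher z

    # Toy usage: two ROIs sharing a common fluctuation plus voxel-wise noise.
    rng = np.random.default_rng(3)
    shared = rng.normal(size=300)
    a1_left = shared + rng.normal(scale=2.0, size=(50, 300))
    a1_right = shared + rng.normal(scale=2.0, size=(40, 300))
    print(interhemispheric_connectivity(a1_left, a1_right))
    ```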

  4. Brainstem timing: implications for cortical processing and literacy.

    PubMed

    Banai, Karen; Nicol, Trent; Zecker, Steven G; Kraus, Nina

    2005-10-26

    The search for a unique biological marker of language-based learning disabilities has so far yielded inconclusive findings. Previous studies have shown a plethora of auditory processing deficits in learning disabilities at both the perceptual and physiological levels. In this study, we investigated the association among brainstem timing, cortical processing of stimulus differences, and literacy skills. To that end, brainstem timing and cortical sensitivity to acoustic change [mismatch negativity (MMN)] were measured in a group of children with learning disabilities and normal-learning children. The learning-disabled (LD) group was further divided into two subgroups with normal and abnormal brainstem timing. MMNs, literacy, and cognitive abilities were compared among the three groups. LD individuals with abnormal brainstem timing were more likely to show reduced processing of acoustic change at the cortical level compared with both normal-learning individuals and LD individuals with normal brainstem timing. This group was also characterized by a more severe form of learning disability manifested by poorer reading, listening comprehension, and general cognitive ability. We conclude that abnormal brainstem timing in learning disabilities is related to higher incidence of reduced cortical sensitivity to acoustic change and to deficient literacy skills. These findings suggest that abnormal brainstem timing may serve as a reliable marker of a subgroup of individuals with learning disabilities. They also suggest that faulty mechanisms of neural timing at the brainstem may be the biological basis of malfunction in this group.

  5. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    PubMed

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  6. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    PubMed

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.
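
    The event-related synchronization/desynchronization contrast (though not the SAM beamforming itself) can be sketched as a band-limited power change relative to a pre-stimulus baseline; the band edges, time windows, and simulated data below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def erd_ers(epochs, fs, band, baseline, window):
        """Percent change in band power relative to baseline (negative = ERD,
        positive = ERS). epochs: (n_trials, n_samples) for one sensor or source;
        band in Hz, baseline and window as (start, end) in seconds."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        power = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1)) ** 2
        t = np.arange(epochs.shape[1]) / fs
        base = power[:, (t >= baseline[0]) & (t < baseline[1])].mean()
        post = power[:, (t >= window[0]) & (t < window[1])].mean()
        return 100.0 * (post - base) / base

    # Toy usage: an 8-16 Hz oscillation that weakens after a stimulus at t = 1 s.
    rng = np.random.default_rng(4)
    fs, n_trials, n_samples = 250, 40, 500
    epochs = rng.normal(size=(n_trials, n_samples))
    t = np.arange(n_samples) / fs
    epochs += np.where(t < 1.0, 1.0, 0.3) * np.sin(2 * np.pi * 10 * t)
    print(erd_ers(epochs, fs, band=(8, 16), baseline=(0.2, 0.8), window=(1.2, 1.8)))
    ```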

  7. Central Auditory Development: Evidence from CAEP Measurements in Children Fit with Cochlear Implants

    ERIC Educational Resources Information Center

    Dorman, Michael F.; Sharma, Anu; Gilley, Phillip; Martin, Kathryn; Roland, Peter

    2007-01-01

    In normal-hearing children the latency of the P1 component of the cortical evoked response to sound varies as a function of age and, thus, can be used as a biomarker for maturation of central auditory pathways. We assessed P1 latency in 245 congenitally deaf children fit with cochlear implants following various periods of auditory deprivation. If…

  8. A lexical semantic hub for heteromodal naming in middle fusiform gyrus.

    PubMed

    Forseth, Kiefer James; Kadipasaoglu, Cihan Mehmet; Conner, Christopher Richard; Hickok, Gregory; Knight, Robert Thomas; Tandon, Nitin

    2018-07-01

    Semantic memory underpins our understanding of objects, people, places, and ideas. Anomia, a disruption of semantic memory access, is the most common residual language disturbance and is seen in dementia and following injury to temporal cortex. While such anomia has been well characterized by lesion symptom mapping studies, its pathophysiology is not well understood. We hypothesize that inputs to the semantic memory system engage a specific heteromodal network hub that integrates lexical retrieval with the appropriate semantic content. Such a network hub has been proposed by others, but has thus far eluded precise spatiotemporal delineation. This limitation in our understanding of semantic memory has impeded progress in the treatment of anomia. We evaluated the cortical structure and dynamics of the lexical semantic network in driving speech production in a large cohort of patients with epilepsy using electrocorticography (n = 64), functional MRI (n = 36), and direct cortical stimulation (n = 30) during two generative language processes that rely on semantic knowledge: visual picture naming and auditory naming to definition. Each task also featured a non-semantic control condition: scrambled pictures and reversed speech, respectively. These large-scale data of the left, language-dominant hemisphere uniquely enable convergent, high-resolution analyses of neural mechanisms characterized by rapid, transient dynamics with strong interactions between distributed cortical substrates. We observed three stages of activity during both visual picture naming and auditory naming to definition that were serially organized: sensory processing, lexical semantic processing, and articulation. Critically, the second stage was absent in both the visual and auditory control conditions. Group activity maps from both electrocorticography and functional MRI identified heteromodal responses in middle fusiform gyrus, intraparietal sulcus, and inferior frontal gyrus; furthermore, the spectrotemporal profiles of these three regions revealed coincident activity preceding articulation. Only in the middle fusiform gyrus did direct cortical stimulation disrupt both naming tasks while still preserving the ability to repeat sentences. These convergent data strongly support a model in which a distinct neuroanatomical substrate in middle fusiform gyrus provides access to object semantic information. This under-appreciated locus of semantic processing is at risk in resections for temporal lobe epilepsy as well as in trauma and strokes that affect the inferior temporal cortex; it may explain the range of anomic states seen in these conditions. Further characterization of brain network behaviour engaging this region in both healthy and diseased states will expand our understanding of semantic memory and further the development of therapies directed at anomia.

  9. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2017-01-01

    Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788
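
    A schematic version of the reconstruction idea, assuming a plain ridge decoder in place of the study's encoding-model inversion: voxel response patterns are mapped to spectrotemporal modulation features on training sounds, and the features of held-out sounds are then reconstructed and scored by correlation. The dimensions and regularization below are arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(5)
    n_train, n_test, n_voxels, n_features = 120, 20, 800, 64  # features = (freq x rate) bins

    # Simulated modulation energies per sound and a linear voxel model plus noise.
    features = rng.normal(size=(n_train + n_test, n_features))
    weights = rng.normal(size=(n_features, n_voxels)) / np.sqrt(n_features)
    responses = features @ weights + 0.5 * rng.normal(size=(n_train + n_test, n_voxels))

    decoder = Ridge(alpha=10.0)
    decoder.fit(responses[:n_train], features[:n_train])      # training sounds
    reconstructed = decoder.predict(responses[n_train:])      # held-out sounds

    # Reconstruction accuracy: correlation between true and reconstructed features.
    acc = [np.corrcoef(features[n_train + i], reconstructed[i])[0, 1]
           for i in range(n_test)]
    print("median reconstruction r = %.2f" % np.median(acc))
    ```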

  10. Human Evoked Cortical Activity to Silent Gaps in Noise: Effects of Age, Attention, and Cortical Processing Speed

    PubMed Central

    Harris, Kelly C.; Wilson, Sara; Eckert, Mark A.; Dubno, Judy R.

    2011-01-01

    Objectives: The goal of this study was to examine the degree to which age-related differences in early or automatic levels of auditory processing and attention-related processes explain age-related differences in auditory temporal processing. We hypothesized that age-related differences in attention and cognition compound age-related differences at automatic levels of processing, contributing to the robust age effects observed during challenging listening tasks. Design: We examined age-related and individual differences in cortical event-related potential (ERP) amplitudes and latencies, processing speed, and gap detection from twenty-five younger and twenty-five older adults with normal hearing. ERPs were elicited by brief silent periods (gaps) in an otherwise continuous broadband noise and were measured under two listening conditions, passive and active. During passive listening, participants ignored the stimulus and read quietly. During active listening, participants button pressed each time they detected a gap. Gap detection (percent detected) was calculated for each gap duration during active listening (3, 6, 9, 12 and 15 ms). Processing speed was assessed using the Purdue Pegboard test and the Connections Test. Repeated measures ANOVAs assessed effects of age on gap detection, processing speed, and ERP amplitudes and latencies. An “attention modulation” construct was created using linear regression to examine the effects of attention while controlling for age-related differences in auditory processing. Pearson correlation analyses assessed the extent to which attention modulation, ERPs, and processing speed predicted behavioral gap detection. Results: Older adults had significantly poorer gap detection and slower processing speed than younger adults. Even after adjusting for poorer gap detection, the neurophysiological response to gap onset was atypical in older adults with reduced P2 amplitudes and virtually absent N2 responses. Moreover, individual differences in attention modulation of P2 response latencies and N2 amplitudes predicted gap detection and processing speed in older adults. That is, older adults with P2 latencies that decreased and N2 amplitudes that increased with active listening had faster processing speed and better gap detection than those older adults whose P2 latencies increased and N2 amplitudes decreased with attention. Conclusions: Results from the current study are broadly consistent with previous findings that older adults exhibit significantly poorer gap detection than younger adults in challenging tasks. Even after adjusting for poorer gap detection, older and younger adults showed robust differences in their electrophysiological responses to sound offset. Furthermore, the degree to which attention modulated the ERP was associated with individual variation in measures of processing speed and gap detection. Taken together, these results suggest an age-related deficit in early or automatic levels of auditory temporal processing and that some older adults may be less able to compensate for declines in processing by attending to the stimulus. These results extend our previous findings and support the hypothesis that age-related differences in cognitive or attention-related processing, including processing speed, contribute to an age-related decrease in gap detection. PMID:22374321
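
    The “attention modulation” construct described above can be approximated as a residualized change score: the active-minus-passive change in an ERP measure is regressed on the passive measure, and the residual indexes attentional modulation over and above automatic processing. The variable names, units, and simulated data below are hypothetical, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    def attention_modulation_index(passive, active):
        """Residualized active-minus-passive change: the portion of attentional
        modulation not explained by the automatic (passive) response."""
        change = active - passive
        slope, intercept, *_ = stats.linregress(passive, change)
        return change - (intercept + slope * passive)      # residuals

    # Toy usage: P2 latency under passive and active listening in 25 older adults.
    rng = np.random.default_rng(6)
    passive_p2 = rng.normal(200, 15, size=25)               # ms, hypothetical
    active_p2 = passive_p2 - rng.normal(5, 8, size=25)      # attention shortens latency
    gap_detection = 70 + 1.5 * (passive_p2 - active_p2) + rng.normal(0, 5, size=25)

    ami = attention_modulation_index(passive_p2, active_p2)
    print(stats.pearsonr(ami, gap_detection))
    ```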

  11. Long latency auditory evoked potentials in children with cochlear implants: systematic review.

    PubMed

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Matas, Carla Gentile; Carvalho, Ana Claudia Martinho de

    2013-11-25

    The aim of this study was to analyze the findings on Cortical Auditory Evoked Potentials in children with cochlear implants through a systematic literature review. After formulating the research question, studies were searched in four databases using the following descriptors: electrophysiology (eletrofisiologia), cochlear implantation (implante coclear), child (criança), neuronal plasticity (plasticidade neuronal) and audiology (audiologia); original, complete articles published between 2002 and 2013 in Brazilian Portuguese or English were selected. A total of 208 studies were found; however, only 13 met the established criteria and were further analyzed; data were extracted to analyze the methodology and content of the studies. The results described suggest rapid changes in the P1 component of Cortical Auditory Evoked Potentials in children with cochlear implants. Although there are few studies on the theme, cochlear implantation has been shown to produce effective changes in central auditory pathways, especially in children implanted before 3 years and 6 months of age.

  12. PTEN regulation of local and long-range connections in mouse auditory cortex

    PubMed Central

    Xiong, Qiaojie; Oviedo, Hysell V; Trotman, Lloyd C; Zador, Anthony M

    2012-01-01

    Autism Spectrum Disorders (ASDs) are highly heritable developmental disorders caused by a heterogeneous collection of genetic lesions. Here we use a mouse model to study the effect on cortical connectivity of disrupting the ASD candidate gene PTEN. Through Cre-mediated recombination we conditionally knocked out PTEN expression in a subset of auditory cortical neurons. Analysis of long range connectivity using channelrhodopsin-2 (ChR2) revealed that the strength of synaptic inputs from both the contralateral auditory cortex and from the thalamus onto PTEN-cko neurons was enhanced compared with nearby neurons with normal PTEN expression. Laser scanning photostimulation (LSPS) showed that local inputs onto PTEN-cko neurons in the auditory cortex were similarly enhanced. The hyperconnectivity caused by PTEN-cko could be blocked by rapamycin, a specific inhibitor of the PTEN downstream molecule mTORC1. Together our results suggest that local and long-range hyperconnectivity may constitute a physiological basis for the effects of mutations in PTEN and possibly other ASD candidate genes. PMID:22302806

  13. Tracing the neural basis of auditory entrainment.

    PubMed

    Lehmann, Alexandre; Arias, Diana Jimena; Schönwiesner, Marc

    2016-11-19

    Neurons in the auditory cortex synchronize their responses to temporal regularities in sound input. This coupling or "entrainment" is thought to facilitate beat extraction and rhythm perception in temporally structured sounds, such as music. As a consequence of such entrainment, the auditory cortex responds to an omitted (silent) sound in a regular sequence. Although previous studies suggest that the auditory brainstem frequency-following response (FFR) exhibits some of the beat-related effects found in the cortex, it is unknown whether omissions of sounds evoke a brainstem response. We simultaneously recorded cortical and brainstem responses to isochronous and irregular sequences of consonant-vowel syllable /da/ that contained sporadic omissions. The auditory cortex responded strongly to omissions, but we found no evidence of evoked responses to omitted stimuli from the auditory brainstem. However, auditory brainstem responses in the isochronous sound sequence were more consistent across trials than in the irregular sequence. These results indicate that the auditory brainstem faithfully encodes short-term acoustic properties of a stimulus and is sensitive to sequence regularity, but does not entrain to isochronous sequences sufficiently to generate overt omission responses, even for sequences that evoke such responses in the cortex. These findings add to our understanding of the processing of sound regularities, which is an important aspect of human cognitive abilities like rhythm, music and speech perception. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
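
    The abstract reports that brainstem responses were "more consistent across trials" for isochronous than for irregular sequences. One common way to quantify trial-to-trial consistency is the mean pairwise correlation between single-trial waveforms; the sketch below illustrates that metric on synthetic data and is not the authors' exact measure.

```python
# Hedged sketch of one common trial-to-trial consistency metric: the mean
# pairwise Pearson correlation between single-trial response waveforms.
# The data are synthetic; the paper's exact metric may differ.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 200, 512

# Simulate single-trial FFR-like responses: a common waveform plus trial noise.
template = np.sin(2 * np.pi * 100 * np.arange(n_samples) / 8000.0)
trials = template + rng.normal(0.0, 1.0, size=(n_trials, n_samples))

def intertrial_consistency(x):
    """Mean of the off-diagonal entries of the trial-by-trial correlation matrix."""
    corr = np.corrcoef(x)
    off_diag = corr[~np.eye(len(corr), dtype=bool)]
    return off_diag.mean()

print(f"inter-trial consistency: {intertrial_consistency(trials):.3f}")
```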

  14. Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications.

    PubMed

    Heimler, Benedetta; Striem-Amit, Ella; Amedi, Amir

    2015-12-01

    Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.

  16. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes.

    PubMed

    Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently, which eventually spreads across the community and contributes to sound change.

  17. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes

    PubMed Central

    Maggu, Akshay R.; Liu, Fang; Antoniou, Mark; Wong, Patrick C. M.

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently, which eventually spreads across the community and contributes to sound change. PMID:28066218

  18. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.
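
    The abstract centers on "envelope coding", i.e. neural tracking of the temporal envelope of modulated noise. As a toy illustration only, the sketch below extracts a stimulus envelope with the Hilbert transform and correlates it with a simulated neural time course; the sample rate, modulation rate, and the simple correlation metric are assumptions and do not reproduce the MEG phase-locking analysis in the paper.

```python
# Hedged toy sketch of "envelope coding": extract the temporal envelope of a
# modulated noise with the Hilbert transform and correlate it with a simulated
# neural time course. Stands in for, but does not reproduce, the MEG analysis.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs = 1000.0                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)

# Amplitude-modulated noise (4 Hz modulation) and its envelope.
carrier = rng.normal(size=t.size)
stimulus = (1.0 + np.sin(2 * np.pi * 4 * t)) * carrier
envelope = np.abs(hilbert(stimulus))

# Simulated neural response that partially follows the envelope.
neural = 0.6 * envelope + rng.normal(0.0, envelope.std(), size=t.size)

r = np.corrcoef(envelope, neural)[0, 1]
print(f"envelope-response correlation: {r:.2f}")
```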

  19. Negative BOLD in sensory cortices during verbal memory: a component in generating internal representations?

    PubMed

    Azulay, Haim; Striem, Ella; Amedi, Amir

    2009-05-01

    People tend to close their eyes when trying to retrieve an event or a visual image from memory. However, the brain mechanisms behind this phenomenon remain poorly understood. Recently, we showed that during visual mental imagery, auditory areas show a much more robust deactivation than during visual perception. Here we ask whether this is a special case of a more general phenomenon involving retrieval of intrinsic, internally stored information, which would result in crossmodal deactivations in other sensory cortices that are irrelevant to the task at hand. To test this hypothesis, a group of 9 sighted individuals were scanned while performing a memory retrieval task for highly abstract words (i.e., with low imaginability scores). We also scanned a group of 10 congenitally blind individuals, who by definition do not have any visual imagery per se. In sighted subjects, both auditory and visual areas were robustly deactivated during memory retrieval, whereas in the blind the auditory cortex was deactivated while visual areas, shown previously to be relevant for this task, presented a positive BOLD signal. These results suggest that deactivation may be most prominent in task-irrelevant sensory cortices whenever there is a need for retrieval or manipulation of internally stored representations. Thus, there is a task-dependent balance of activation and deactivation that might allow maximization of resources and filtering out of non-relevant information to enable allocation of attention to the required task. Furthermore, these results suggest that the balance between positive and negative BOLD might be crucial to our understanding of a large variety of intrinsic and extrinsic tasks including high-level cognitive functions, sensory processing and multisensory integration.

  20. Auditory cortical activation and plasticity after cochlear implantation measured by PET using fluorodeoxyglucose.

    PubMed

    Łukaszewicz-Moszyńska, Zuzanna; Lachowska, Magdalena; Niemczyk, Kazimierz

    2014-01-01

    The purpose of this study was to evaluate possible relationships between duration of cochlear implant use and results of positron emission tomography (PET) measurements in the temporal lobes performed while subjects listened to speech stimuli. Other aspects investigated were whether implantation side impacts significantly on cortical representations of functions related to understanding speech (ipsi- or contralateral to the implanted side) and whether any correlation exists between cortical activation and speech therapy results. Objective cortical responses to acoustic stimulation were measured, using PET, in nine cochlear implant patients (age range: 15 to 50 years). All the patients suffered from bilateral deafness, were right-handed, and had no additional neurological deficits. They underwent PET imaging three times: immediately after the first fitting of the speech processor (activation of the cochlear implant), and one and two years later. A tendency towards increasing levels of activation in areas of the primary and secondary auditory cortex on the left side of the brain was observed. There was no clear effect of the side of implantation (left or right) on the degree of cortical activation in the temporal lobe. However, the PET results showed a correlation between degree of cortical activation and speech therapy results.

  1. Auditory cortical activation and plasticity after cochlear implantation measured by PET using fluorodeoxyglucose

    PubMed Central

    Łukaszewicz-Moszyńska, Zuzanna; Lachowska, Magdalena; Niemczyk, Kazimierz

    2014-01-01

    Summary The purpose of this study was to evaluate possible relationships between duration of cochlear implant use and results of positron emission tomography (PET) measurements in the temporal lobes performed while subjects listened to speech stimuli. Other aspects investigated were whether implantation side impacts significantly on cortical representations of functions related to understanding speech (ipsi- or contralateral to the implanted side) and whether any correlation exists between cortical activation and speech therapy results. Objective cortical responses to acoustic stimulation were measured, using PET, in nine cochlear implant patients (age range: 15 to 50 years). All the patients suffered from bilateral deafness, were right-handed, and had no additional neurological deficits. They underwent PET imaging three times: immediately after the first fitting of the speech processor (activation of the cochlear implant), and one and two years later. A tendency towards increasing levels of activation in areas of the primary and secondary auditory cortex on the left side of the brain was observed. There was no clear effect of the side of implantation (left or right) on the degree of cortical activation in the temporal lobe. However, the PET results showed a correlation between degree of cortical activation and speech therapy results. PMID:25306122

  2. Effects of aging and sensory loss on glial cells in mouse visual and auditory cortices.

    PubMed

    Tremblay, Marie-Ève; Zettel, Martha L; Ison, James R; Allen, Paul D; Majewska, Ania K

    2012-04-01

    Normal aging is often accompanied by a progressive loss of receptor sensitivity in hearing and vision, whose consequences on cellular function in cortical sensory areas have remained largely unknown. By examining the primary auditory (A1) and visual (V1) cortices in two inbred strains of mice undergoing either age-related loss of audition (C57BL/6J) or vision (CBA/CaJ), we were able to describe cellular and subcellular changes that were associated with normal aging (occurring in A1 and V1 of both strains) or specifically with age-related sensory loss (only in A1 of C57BL/6J or V1 of CBA/CaJ), using immunocytochemical electron microscopy and light microscopy. While the changes were subtle in neurons, glial cells and especially microglia were transformed in aged animals. Microglia became more numerous and irregularly distributed, displayed more variable cell body and process morphologies, occupied smaller territories, and accumulated phagocytic inclusions that often displayed ultrastructural features of synaptic elements. Additionally, evidence of myelination defects was observed, and aged oligodendrocytes became more numerous and were more often encountered in contiguous pairs. Most of these effects were profoundly exacerbated by age-related sensory loss. Together, our results suggest that the age-related alteration of glial cells in sensory cortical areas can be accelerated by activity-driven central mechanisms that result from an age-related loss of peripheral sensitivity. In light of our observations, these age-related changes in sensory function should be considered when investigating cellular, cortical, and behavioral functions throughout the lifespan in these commonly used C57BL/6J and CBA/CaJ mouse models. Copyright © 2012 Wiley Periodicals, Inc.

  3. Effects of aging and sensory loss on glial cells in mouse visual and auditory cortices

    PubMed Central

    Tremblay, Marie-Ève; Zettel, Martha L.; Ison, James R.; Allen, Paul D.; Majewska, Ania K.

    2011-01-01

    Normal aging is often accompanied by a progressive loss of receptor sensitivity in hearing and vision, whose consequences on cellular function in cortical sensory areas have remained largely unknown. By examining the primary auditory (A1) and visual (V1) cortices in two inbred strains of mice undergoing either age-related loss of audition (C57BL/6J) or vision (CBA/CaJ), we were able to describe cellular and subcellular changes that were associated with normal aging (occurring in A1 and V1 of both strains) or specifically with age-related sensory loss (only in A1 of C57BL/6J or V1 of CBA/CaJ), using immunocytochemical electron microscopy and light microscopy. While the changes were subtle in neurons, glial cells and especially microglia were transformed in aged animals. Microglia became more numerous and irregularly distributed, displayed more variable cell body and process morphologies, occupied smaller territories, and accumulated phagocytic inclusions that often displayed ultrastructural features of synaptic elements. Additionally, evidence of myelination defects was observed, and aged oligodendrocytes became more numerous and were more often encountered in contiguous pairs. Most of these effects were profoundly exacerbated by age-related sensory loss. Together, our results suggest that the age-related alteration of glial cells in sensory cortical areas can be accelerated by activity-driven central mechanisms that result from an age-related loss of peripheral sensitivity. In light of our observations, these age-related changes in sensory function should be considered when investigating cellular, cortical and behavioral functions throughout the lifespan in these commonly used C57BL/6J and CBA/CaJ mouse models. PMID:22223464

  4. Memory Reactivation during Rapid Eye Movement Sleep Promotes Its Generalization and Integration in Cortical Stores

    PubMed Central

    Sterpenich, Virginie; Schmidt, Christina; Albouy, Geneviève; Matarazzo, Luca; Vanhaudenhuyse, Audrey; Boveroux, Pierre; Degueldre, Christian; Leclercq, Yves; Balteau, Evelyne; Collette, Fabienne; Luxen, André; Phillips, Christophe; Maquet, Pierre

    2014-01-01

    Study Objectives: Memory reactivation appears to be a fundamental process in memory consolidation. In this study we tested the influence of memory reactivation during rapid eye movement (REM) sleep on memory performance and brain responses at retrieval in healthy human participants. Participants: Fifty-six healthy subjects (28 women and 28 men, age [mean ± standard deviation]: 21.6 ± 2.2 y) participated in this functional magnetic resonance imaging (fMRI) study. Methods and Results: Auditory cues were associated with pictures of faces during their encoding. These memory cues delivered during REM sleep enhanced subsequent accurate recollections but also false recognitions. These results suggest that reactivated memories interacted with semantically related representations, and induced new creative associations, which subsequently reduced the distinction between new and previously encoded exemplars. Cues had no effect if presented during stage 2 sleep, or if they were not associated with faces during encoding. Functional magnetic resonance imaging revealed that following exposure to conditioned cues during REM sleep, responses to faces during retrieval were enhanced both in a visual area and in a cortical region of multisensory (auditory-visual) convergence. Conclusions: These results show that reactivating memories during REM sleep enhances cortical responses during retrieval, suggesting the integration of recent memories within cortical circuits, favoring the generalization and schematization of the information. Citation: Sterpenich V, Schmidt C, Albouy G, Matarazzo L, Vanhaudenhuyse A, Boveroux P, Degueldre C, Leclercq Y, Balteau E, Collette F, Luxen A, Phillips C, Maquet P. Memory reactivation during rapid eye movement sleep promotes its generalization and integration in cortical stores. SLEEP 2014;37(6):1061-1075. PMID:24882901

  5. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had not been presented or, if it had, with which type of information it had been presented during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of MEG data indicated that higher dipole amplitudes occurred in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
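
    The abstract reports confidence-of-recognition scores (d'). For readers unfamiliar with that measure, the sketch below shows the standard d-prime computation from hit and false-alarm rates; the counts and the simple rate correction are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the standard d' (d-prime) calculation used for recognition
# scores: z(hit rate) - z(false-alarm rate). Counts below are hypothetical.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # A simple correction keeps rates away from 0 and 1 so the z-transform is finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"d' = {d_prime(hits=42, misses=8, false_alarms=10, correct_rejections=40):.2f}")
```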

  6. The influence of gender on auditory and language cortical activation patterns: preliminary data.

    PubMed

    Kocak, Mehmet; Ulmer, John L; Biswal, Bharat B; Aralasmak, Ayse; Daniels, David L; Mark, Leighton P

    2005-10-01

    Intersex cortical and functional asymmetry is an ongoing topic of investigation. In this pilot study, we sought to determine the influence of acoustic scanner noise and sex on auditory and language cortical activation patterns of the dominant hemisphere. Echoplanar functional MR imaging (fMRI; 1.5T) was performed on 12 healthy right-handed subjects (6 men and 6 women). Passive text listening tasks were employed in 2 different background acoustic scanner noise conditions (12 sections/2 seconds TR [6 Hz] and 4 sections/2 seconds TR [2 Hz]), with the first 4 sections in identical locations in the left hemisphere. Cross-correlation analysis was used to construct activation maps in subregions of auditory and language relevant cortex of the dominant (left) hemisphere, and activation areas were calculated by using coefficient thresholds of 0.5, 0.6, and 0.7. Text listening caused robust activation in anatomically defined auditory cortex, and weaker activation in language relevant cortex of all 12 individuals. As a whole, there was no significant difference in regional cortical activation between the 2 background acoustic scanner noise conditions. When sex was considered, men showed a significantly (P < .01) greater change in left hemisphere activation during the high scanner noise rate condition than did women. This effect was significant (P < .05) in the left superior temporal gyrus, the posterior aspect of the left middle temporal gyrus and superior temporal sulcus, and the left inferior frontal gyrus. An increase in the rate of background acoustic scanner noise caused increased activation in auditory and language relevant cortex of the dominant hemisphere in men, whereas no such change in activation was observed in women. Our preliminary data suggest possible methodologic confounds of fMRI research and call for larger investigations to substantiate our findings and further characterize sex-based influences on hemispheric activation patterns.
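
    The abstract mentions a classic cross-correlation fMRI analysis with coefficient thresholds of 0.5, 0.6, and 0.7. The sketch below illustrates that general idea on synthetic data: correlate each voxel time series with a boxcar task regressor and count voxels above each threshold. The data dimensions and regressor shape are assumptions, not the study's acquisition parameters.

```python
# Hedged sketch of a cross-correlation activation-map analysis: correlate each
# voxel's time series with a boxcar task regressor and keep voxels whose
# correlation exceeds a threshold (0.5, 0.6, or 0.7, as in the abstract).
import numpy as np

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 120, 5000

# Boxcar regressor: alternating 10-scan rest/task blocks.
regressor = np.tile(np.r_[np.zeros(10), np.ones(10)], n_timepoints // 20)

# Synthetic data: a subset of "active" voxels follows the regressor.
data = rng.normal(size=(n_timepoints, n_voxels))
data[:, :200] += 1.5 * regressor[:, None]

# Pearson correlation of every voxel with the regressor.
reg_z = (regressor - regressor.mean()) / regressor.std()
dat_z = (data - data.mean(axis=0)) / data.std(axis=0)
corr = dat_z.T @ reg_z / n_timepoints

for threshold in (0.5, 0.6, 0.7):
    print(f"r >= {threshold}: {int((corr >= threshold).sum())} voxels")
```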

  7. Early musical training is linked to gray matter structure in the ventral premotor cortex and auditory-motor rhythm synchronization performance.

    PubMed

    Bailey, Jennifer Anne; Zatorre, Robert J; Penhune, Virginia B

    2014-04-01

    Evidence in animals and humans indicates that there are sensitive periods during development, times when experience or stimulation has a greater influence on behavior and brain structure. Sensitive periods are the result of an interaction between maturational processes and experience-dependent plasticity mechanisms. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 show enhancements in behavior and white matter structure compared with those who begin later. Plastic changes in white matter and gray matter are hypothesized to co-occur; therefore, the current study investigated possible differences in gray matter structure between early-trained (ET; <7) and late-trained (LT; >7) musicians, matched for years of experience. Gray matter structure was assessed using voxel-wise analysis techniques (optimized voxel-based morphometry, traditional voxel-based morphometry, and deformation-based morphometry) and surface-based measures (cortical thickness, surface area and mean curvature). Deformation-based morphometry analyses identified group differences between ET and LT musicians in right ventral premotor cortex (vPMC), which correlated with performance on an auditory motor synchronization task and with age of onset of musical training. In addition, cortical surface area in vPMC was greater for ET musicians. These results are consistent with evidence that premotor cortex shows greatest maturational change between the ages of 6-9 years and that this region is important for integrating auditory and motor information. We propose that the auditory and motor interactions required by musical practice drive plasticity in vPMC and that this plasticity is greatest when maturation is near its peak.

  8. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  9. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357
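
    A key analytic point in this record (and the preceding PubMed version) is cross-cue classification: a classifier trained on multivoxel patterns evoked by one cue (ITD) is tested on patterns evoked by the other (ILD). The sketch below illustrates that logic on synthetic patterns with scikit-learn; the simulated data, noise levels, and classifier choice are assumptions, not the authors' pipeline.

```python
# Hedged sketch of cross-cue classification: train a linear classifier to decode
# left vs right from ITD-evoked multivoxel patterns, then test it on ILD-evoked
# patterns. Above-chance transfer is consistent with a cue-independent code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 300

side = rng.integers(0, 2, n_trials)                  # 0 = left, 1 = right
spatial_pattern = rng.normal(size=n_voxels)          # shared "spatial" pattern

def simulate(cue_noise):
    # +pattern for right trials, -pattern for left trials, plus cue-specific noise.
    signal = np.outer(2 * side - 1, spatial_pattern)
    return signal + rng.normal(0.0, cue_noise, size=(n_trials, n_voxels))

itd_patterns = simulate(cue_noise=4.0)
ild_patterns = simulate(cue_noise=4.0)

clf = LogisticRegression(max_iter=1000).fit(itd_patterns, side)   # train on ITD
print(f"cross-cue accuracy (train ITD, test ILD): {clf.score(ild_patterns, side):.2f}")
```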

  10. Functional and structural changes throughout the auditory system following congenital and early-onset deafness: implications for hearing restoration

    PubMed Central

    Butler, Blake E.; Lomber, Stephen G.

    2013-01-01

    The absence of auditory input, particularly during development, causes widespread changes in the structure and function of the auditory system, extending from peripheral structures into auditory cortex. In humans, the consequences of these changes are far-reaching and often include detriments to language acquisition, and associated psychosocial issues. Much of what is currently known about the nature of deafness-related changes to auditory structures comes from studies of congenitally deaf or early-deafened animal models. Fortunately, the mammalian auditory system shows a high degree of preservation among species, allowing for generalization from these models to the human auditory system. This review begins with a comparison of common methods used to obtain deaf animal models, highlighting the specific advantages and anatomical consequences of each. Some consideration is also given to the effectiveness of methods used to measure hearing loss during and following deafening procedures. The structural and functional consequences of congenital and early-onset deafness have been examined across a variety of mammals. This review attempts to summarize these changes, which often involve alteration of hair cells and supporting cells in the cochleae, and anatomical and physiological changes that extend through subcortical structures and into cortex. The nature of these changes is discussed, and the impacts to neural processing are addressed. Finally, long-term changes in cortical structures are discussed, with a focus on the presence or absence of cross-modal plasticity. In addition to being of interest to our understanding of multisensory processing, these changes also have important implications for the use of assistive devices such as cochlear implants. PMID:24324409

  11. Cortical Auditory Evoked Potentials Recorded From Nucleus Hybrid Cochlear Implant Users.

    PubMed

    Brown, Carolyn J; Jeon, Eun Kyung; Chiou, Li-Kuei; Kirby, Benjamin; Karsten, Sue A; Turner, Christopher W; Abbas, Paul J

    2015-01-01

    Nucleus Hybrid Cochlear Implant (CI) users hear low-frequency sounds via acoustic stimulation and high-frequency sounds via electrical stimulation. This within-subject study compares three different methods of coordinating programming of the acoustic and electrical components of the Hybrid device. Speech perception and cortical auditory evoked potentials (CAEP) were used to assess differences in outcome. The goals of this study were to determine whether (1) the evoked potential measures could predict which programming strategy resulted in better outcome on the speech perception task or was preferred by the listener, and (2) CAEPs could be used to predict which subjects benefitted most from having access to the electrical signal provided by the Hybrid implant. CAEPs were recorded from 10 Nucleus Hybrid CI users. Study participants were tested using three different experimental processor programs (MAPs) that differed in terms of how much overlap there was between the range of frequencies processed by the acoustic component of the Hybrid device and range of frequencies processed by the electrical component. The study design included allowing participants to acclimatize for a period of up to 4 weeks with each experimental program prior to speech perception and evoked potential testing. Performance using the experimental MAPs was assessed using both a closed-set consonant recognition task and an adaptive test that measured the signal-to-noise ratio that resulted in 50% correct identification of a set of 12 spondees presented in background noise. Long-duration, synthetic vowels were used to record both the cortical P1-N1-P2 "onset" response and the auditory "change" response (also known as the auditory change complex [ACC]). Correlations between the evoked potential measures and performance on the speech perception tasks are reported. Differences in performance using the three programming strategies were not large. Peak-to-peak amplitude of the ACC was not found to be sensitive enough to accurately predict the programming strategy that resulted in the best performance on either measure of speech perception. All 10 Hybrid CI users had residual low-frequency acoustic hearing. For all 10 subjects, allowing them to use both the acoustic and electrical signals provided by the implant improved performance on the consonant recognition task. For most subjects, it also resulted in slightly larger cortical change responses. However, the impact that listening mode had on the cortical change responses was small, and again, the correlation between the evoked potential and speech perception results was not significant. CAEPs can be successfully measured from Hybrid CI users. The responses that are recorded are similar to those recorded from normal-hearing listeners. The goal of this study was to see if CAEPs might play a role either in identifying the experimental program that resulted in best performance on a consonant recognition task or in documenting benefit from the use of the electrical signal provided by the Hybrid CI. At least for the stimuli and specific methods used in this study, no such predictive relationship was found.

  12. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing

    PubMed Central

    Rauschecker, Josef P; Scott, Sophie K

    2010-01-01

    Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271

  13. Pairing broadband noise with cortical stimulation induces extensive suppression of ascending sensory activity

    NASA Astrophysics Data System (ADS)

    Markovitz, Craig D.; Hogan, Patrick S.; Wesen, Kyle A.; Lim, Hubert H.

    2015-04-01

    Objective. The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. Approach. We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. Main results. All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. Significance. We propose that the corticofugal system is designed to decrease the brain’s input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.

  14. Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons

    PubMed Central

    Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.

    2015-01-01

    The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746

  15. Detection Rates of Cortical Auditory Evoked Potentials at Different Sensation Levels in Infants with Sensory/Neural Hearing Loss and Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Gardner-Berry, Kirsty; Chang, Hsiuwen; Ching, Teresa Y. C.; Hou, Sanna

    2016-01-01

    With the introduction of newborn hearing screening, infants are being diagnosed with hearing loss during the first few months of life. For infants with a sensory/neural hearing loss (SNHL), the audiogram can be estimated objectively using auditory brainstem response (ABR) testing and hearing aids prescribed accordingly. However, for infants with auditory neuropathy spectrum disorder (ANSD) due to the abnormal/absent ABR waveforms, alternative measures of auditory function are needed to assess the need for amplification and evaluate whether aided benefit has been achieved. Cortical auditory evoked potentials (CAEPs) are used to assess aided benefit in infants with hearing loss; however, there is insufficient information regarding the relationship between stimulus audibility and CAEP detection rates. It is also not clear whether CAEP detection rates differ between infants with SNHL and infants with ANSD. This study involved retrospective collection of CAEP, hearing threshold, and hearing aid gain data to investigate the relationship between stimulus audibility and CAEP detection rates. The results demonstrate that increases in stimulus audibility result in an increase in detection rate. For the same range of sensation levels, there was no difference in the detection rates between infants with SNHL and ANSD. PMID:27587922

  16. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    ERIC Educational Resources Information Center

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  17. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processes within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  18. Inter-subject synchronization of brain responses during natural music listening

    PubMed Central

    Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod

    2015-01-01

    Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
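
    The central measure in this record is inter-subject synchronization of regional time series across listeners. As a minimal, hedged illustration only, the sketch below computes inter-subject correlation (mean pairwise Pearson correlation) for a "natural music" and a control condition using synthetic data; the actual paper uses a more elaborate model-based analysis.

```python
# Hedged sketch of inter-subject synchronization as the mean pairwise Pearson
# correlation of a region's time series across listeners, compared between a
# "natural music" and a control condition. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_timepoints = 17, 400

def simulate_condition(shared_weight):
    shared = rng.normal(size=n_timepoints)              # stimulus-driven component
    noise = rng.normal(size=(n_subjects, n_timepoints)) # idiosyncratic component
    return shared_weight * shared + noise

def inter_subject_corr(ts):
    corr = np.corrcoef(ts)
    return corr[~np.eye(n_subjects, dtype=bool)].mean()

music = simulate_condition(shared_weight=0.8)
control = simulate_condition(shared_weight=0.2)
print(f"ISC natural music: {inter_subject_corr(music):.3f}")
print(f"ISC control:       {inter_subject_corr(control):.3f}")
```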

  19. [Thalamus and Attention].

    PubMed

    Tokoro, Kazuhiko; Sato, Hironobu; Yamamoto, Mayumi; Nagai, Yoshiko

    2015-12-01

    Attention is the process by which information selection occurs, and the thalamus plays an important role in selective attention to visual and auditory information. Selective attention can be a conscious effort; however, it also occurs subconsciously. The lateral geniculate body (LGB) filters visual information before it reaches the cortex (bottom-up attention). The thalamic reticular nucleus (TRN) provides a strong inhibitory input to both the LGB and the pulvinar. This regulation involves focusing a spotlight on important information, as well as inhibiting unnecessary background information. Behavioral contexts more strongly modulate the activity of the TRN and pulvinar, influencing feedforward and feedback information transmission between the frontal, temporal, parietal, and occipital cortical areas (top-down attention). The medial geniculate body (MGB) filters auditory information, and the TRN inhibits the MGB. Attentional modulation occurring in the auditory pathway among the cochlea, cochlear nucleus, superior olivary complex, and inferior colliculus is more important than that of the MGB and TRN. We also discuss the attentional consequences of thalamic hemorrhage.

  20. Auditory cortex of newborn bats is prewired for echolocation.

    PubMed

    Kössl, Manfred; Voss, Cornelia; Mora, Emanuel C; Macias, Silvio; Foeller, Elisabeth; Vater, Marianne

    2012-04-10

    Neuronal computation of object distance from echo delay is an essential task that echolocating bats must master for spatial orientation and the capture of prey. In the dorsal auditory cortex of bats, neurons specifically respond to combinations of short frequency-modulated components of emitted call and delayed echo. These delay-tuned neurons are thought to serve in target range calculation. It is unknown whether neuronal correlates of active space perception are established by experience-dependent plasticity or by innate mechanisms. Here we demonstrate that in the first postnatal week, before onset of echolocation and flight, dorsal auditory cortex already contains functional circuits that calculate distance from the temporal separation of a simulated pulse and echo. This innate cortical implementation of a purely computational processing mechanism for sonar ranging should enhance survival of juvenile bats when they first engage in active echolocation behaviour and flight.

  1. Cortical Sensitivity to Guitar Note Patterns: EEG Entrainment to Repetition and Key.

    PubMed

    Bridwell, David A; Leslie, Emily; McCoy, Dakarai Q; Plis, Sergey M; Calhoun, Vince D

    2017-01-01

    Music is ubiquitous throughout recent human culture, and many individuals have an innate ability to appreciate and understand music. Our appreciation of music likely emerges from the brain's ability to process a series of repeated complex acoustic patterns. In order to understand these processes further, cortical responses were measured to a series of guitar notes presented with a musical pattern or without a pattern. ERP responses to individual notes were measured using a 24 electrode Bluetooth mobile EEG system (Smarting mBrainTrain) while 13 healthy non-musicians listened to structured (i.e., within musical keys and with repetition) or random sequences of guitar notes for 10 min each. We demonstrate an increased amplitude of the ERP appearing at ~200 ms in response to notes presented within the musical sequence. This amplitude difference between random notes and patterned notes likely reflects individuals' cortical sensitivity to guitar note patterns. These amplitudes were compared to ERP responses to a rare note embedded within a stream of frequent notes to determine whether the sensitivity to complex musical structure overlaps with the sensitivity to simple irregularities reflected in traditional auditory oddball experiments. Response amplitudes at the negative peak at ~175 ms are statistically correlated with the mismatch negativity (MMN) response measured to a rare note presented among a series of frequent notes (i.e., in a traditional oddball sequence), but responses at the subsequent positive peak at ~200 ms do not show a statistical relationship with the P300 response. Thus, the sensitivity to musical structure identified for 4 Hz note patterns appears somewhat distinct from the sensitivity to statistical regularities reflected in the traditional "auditory oddball" sequence. Overall, we suggest that this is a promising approach to examine individuals' sensitivity to complex acoustic patterns, which may overlap with higher level cognitive processes, including language.
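
    The core comparison in this record is the amplitude of the averaged ERP around 200 ms for notes in structured versus random sequences. The sketch below illustrates that comparison on synthetic epochs; the sample rate, epoch length, and measurement window are assumptions rather than the study's parameters.

```python
# Hedged sketch of the core ERP comparison: epoch EEG around note onsets,
# average within condition, and compare mean amplitude in a ~200 ms window
# between structured and random sequences. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(6)
fs = 250                                        # samples per second
epoch_t = np.arange(-0.1, 0.5, 1.0 / fs)        # -100 to +500 ms around note onset

def simulate_epochs(p200_gain, n_epochs=300):
    # Gaussian bump near 200 ms stands in for the pattern-sensitive component.
    p200 = p200_gain * np.exp(-((epoch_t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return p200 + rng.normal(0.0, 1.0, size=(n_epochs, epoch_t.size))

structured = simulate_epochs(p200_gain=2.0)     # notes within a musical pattern
random_seq = simulate_epochs(p200_gain=1.0)     # pattern-free note sequence

window = (epoch_t >= 0.17) & (epoch_t <= 0.23)  # mean amplitude around 200 ms
amp_structured = structured.mean(axis=0)[window].mean()
amp_random = random_seq.mean(axis=0)[window].mean()
print(f"structured - random amplitude: {amp_structured - amp_random:.2f} (a.u.)")
```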

  2. Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.

    PubMed

    Lu, Qi; Jiang, Cuiping; Zhang, Jiping

    2016-02-01

    Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) rats and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories in juvenile rats were similar to those in adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, the A1 neurons in juvenile rats showed greater absolute response strength and longer first-spike latency than those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of the responses of rat A1 neurons on sound rise-fall time, and suggest that response latency exhibits some age-related changes in the cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.
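
    The abstract categorizes neurons as "all-pass", "short-pass", or "mixed" from their normalized response-strength vs. rise-fall-time functions across sound levels. The sketch below shows one plausible way to implement such a categorization; the 50%-of-maximum cutoff and the example response values are illustrative assumptions, not the criterion reported in the study.

```python
# Hedged sketch of categorizing normalized response-strength vs rise-fall-time
# functions as "all-pass", "short-pass", or "mixed". The 50%-of-maximum cutoff
# applied per sound level is an assumption for illustration only.
import numpy as np

rise_fall_ms = np.array([1, 2, 4, 8, 16, 32])

def classify(resp_by_level, cutoff=0.5):
    """resp_by_level: rows = sound levels (high to low), columns = rise-fall times."""
    level_types = []
    for resp in resp_by_level:
        norm = resp / resp.max()
        # "short-pass" if the response falls below the cutoff at long rise-fall times.
        level_types.append("short-pass" if norm[-1] < cutoff else "all-pass")
    if all(t == "all-pass" for t in level_types):
        return "all-pass"
    if all(t == "short-pass" for t in level_types):
        return "short-pass"
    return "mixed"

example = np.array([
    [1.0, 0.95, 0.9, 0.9, 0.85, 0.8],   # high level: flat function -> all-pass
    [1.0, 0.9, 0.7, 0.5, 0.35, 0.2],    # lower level: falls off -> short-pass
])
print(classify(example))                # -> "mixed"
```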

  3. Effects of long-term non-traumatic noise exposure on the adult central auditory system. Hearing problems without hearing loss.

    PubMed

    Eggermont, Jos J

    2017-09-01

    It is known that hearing loss induces plastic changes in the brain, causing loudness recruitment and hyperacusis, increased spontaneous firing rates and neural synchrony, reorganizations of the cortical tonotopic maps, and tinnitus. Much less is known about the central effects of exposure to sounds that cause a temporary hearing loss, affect the ribbon synapses in the inner hair cells, and cause a loss of high-threshold auditory nerve fibers. In contrast, there is a wealth of information about central effects of long-duration sound exposures at levels ≤80 dB SPL that do not even cause a temporary hearing loss. The central effects of these moderate-level exposures described in this review include changes in central gain, increased spontaneous firing rates and neural synchrony, and reorganization of the cortical tonotopic map. A putative mechanism is outlined, and the effect of the acoustic environment during the recovery process is illustrated. Parallels are drawn with hearing problems in humans with long-duration exposures to occupational noise but with clinically normal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Cortical evoked responses associated with arousal from sleep.

    PubMed

    Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M

    2011-01-01

    To determine if low-level intermittent auditory stimuli have the potential to disrupt sleep during 24-h recordings, we assessed arousal occurrence to varying stimulus intensities. Additionally, if stimulus-generated evoked response potential (ERP) components provide a metric of underlying cortical state, then a particular ERP structure may precede an arousal. Physiological electrodes measuring EEG, EKG, and EMG were implanted into 5 adult female Sprague-Dawley rats. We delivered auditory stimuli of varying intensities (50-75 dBA sound pressure level, SPL) at random intervals of 6-12 s over a 24-h period. Recordings were divided into 2-s epochs and scored for sleep/wake state. Following each stimulus, we identified whether the animal stayed asleep or woke. We then sorted the stimuli depending on prior and post-stimulus state, and measured ERP components. Auditory stimuli did not produce a significant increase in the number of arousals compared to silent control periods. Overall, arousal from REM sleep occurred more often than from quiet sleep. ERPs preceding an arousal had decreased mean area and shorter N1 latency. Low-level auditory stimuli did not fragment animal sleep, since we observed no significant change in arousal occurrence. Arousals that occurred within 4 s of a stimulus were preceded by ERPs whose mean area and latency resembled those of ERPs generated during wake, indicating that the underlying cortical tissue state may contribute to the physiological conditions required for arousal.
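    A minimal Python/NumPy sketch of this kind of analysis, splitting stimulus-locked epochs by pre- and post-stimulus state and computing ERP mean area and N1 latency, is given below. The sampling rate, N1 search window, and toy data are illustrative assumptions rather than the study's parameters.

        import numpy as np

        FS = 500  # Hz, assumed sampling rate

        def erp_metrics(epochs, fs=FS, n1_window=(0.05, 0.2)):
            """Return mean area (integral of the rectified average ERP) and
            N1 latency (most negative peak inside `n1_window`, in seconds)."""
            erp = epochs.mean(axis=0)
            area = np.abs(erp).sum() / fs
            i0, i1 = (int(t * fs) for t in n1_window)
            n1_latency = (i0 + np.argmin(erp[i0:i1])) / fs
            return area, n1_latency

        def sort_by_outcome(epochs, state_before, state_after):
            """Split stimulus-locked epochs by whether the animal stayed asleep
            ('sleep' -> 'sleep') or woke ('sleep' -> 'wake') after the stimulus."""
            stayed = [e for e, b, a in zip(epochs, state_before, state_after)
                      if b == "sleep" and a == "sleep"]
            aroused = [e for e, b, a in zip(epochs, state_before, state_after)
                       if b == "sleep" and a == "wake"]
            return np.array(stayed), np.array(aroused)

        # Toy example: 200 stimulus-locked epochs of 0.4 s each.
        rng = np.random.default_rng(1)
        epochs = rng.normal(0, 1, (200, int(0.4 * FS)))
        before = rng.choice(["sleep", "wake"], 200, p=[0.8, 0.2])
        after = rng.choice(["sleep", "wake"], 200, p=[0.7, 0.3])

        stayed, aroused = sort_by_outcome(epochs, before, after)
        print("stayed asleep:", erp_metrics(stayed))
        print("aroused:      ", erp_metrics(aroused))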

  5. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    PubMed

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone, and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Neural signatures of second language learning and control.

    PubMed

    Bartolotti, James; Bradley, Kailyn; Hernandez, Arturo E; Marian, Viorica

    2017-04-01

    Experience with multiple languages has unique effects on cortical structure and information processing. Differences in gray matter density and patterns of cortical activation are observed in lifelong bilinguals compared to monolinguals as a result of their experience managing interference across languages. Monolinguals who acquire a second language later in life begin to encounter the same type of linguistic interference as bilinguals, but with a different pre-existing language architecture. The current study used functional magnetic resonance imaging to explore the beginning stages of second language acquisition and cross-linguistic interference in monolingual adults. We found that after English monolinguals learned novel Spanish vocabulary, English and Spanish auditory words led to distinct patterns of cortical activation, with greater recruitment of posterior parietal regions in response to English words and of left hippocampus in response to Spanish words. In addition, cross-linguistic interference from English influenced processing of newly-learned Spanish words, decreasing hippocampal activity. Results suggest that monolinguals may rely on different memory systems to process a newly-learned second language, and that the second language system is sensitive to native language interference. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Cortical Interactions Underlying the Production of Speech Sounds

    ERIC Educational Resources Information Center

    Guenther, Frank H.

    2006-01-01

    Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…

  8. A MEG Investigation of Single-Word Auditory Comprehension in Aphasia

    ERIC Educational Resources Information Center

    Zipse, Lauryn; Kearns, Kevin; Nicholas, Marjorie; Marantz, Alec

    2011-01-01

    Purpose: To explore whether individuals with aphasia exhibit differences in the M350, an electrophysiological marker of lexical activation, compared with healthy controls. Method: Seven people with aphasia, 9 age-matched controls, and 10 younger controls completed an auditory lexical decision task while cortical activity was recorded with…

  9. Theoretical Tinnitus Framework: A Neurofunctional Model.

    PubMed

    Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G

    2016-01-01

    Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception: unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory and prefrontal cortices. Functionally, we assume the model includes the presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise-canceling mechanisms in the midbrain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease, in which case cortical top-down processes weaken the noise-canceling effects. This results in an increase in negative cognitive and emotional reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli, similar to the abnormal neural activity generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a cognitive-emotional negative appraisal of stimuli, as in the case of people presenting with hypochondria. We acknowledge that the projected Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques.

  11. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    PubMed Central

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a cochlear implant (CI) program who had bilateral profound sensorineural hearing loss and underwent cochlear implantation, with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after 3, 9, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The LLAEP results of the child with a cochlear implant showed a gradual decrease in the latency of the P1 component after auditory stimulation (172 ms to 134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The LLAEP values of the hearing child were within the range expected for chronological age (132 ms to 128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163

  12. Mismatch negativity evoked by the McGurk-MacDonald effect: a phonetic representation within short-term memory.

    PubMed

    Colin, C; Radeau, M; Soquet, A; Demolin, D; Colin, F; Deltenre, P

    2002-04-01

    The McGurk-MacDonald illusory percept is obtained by dubbing an incongruent articulatory movement onto an auditory phoneme. This type of audiovisual speech perception contributes to the assessment of theories of speech perception. The mismatch negativity (MMN) reflects the detection of a deviant stimulus within auditory short-term memory and, besides an acoustic component, possesses a phonetic one under certain conditions. The present study assessed the existence of an MMN evoked by McGurk-MacDonald percepts elicited by audiovisual stimuli with constant auditory components. Cortical evoked potentials were recorded using the oddball paradigm in 8 adults under 3 experimental conditions: auditory alone, visual alone, and audiovisual stimulation. Obtaining illusory percepts was confirmed in an additional psychophysical condition. The auditory deviant syllables and the audiovisual incongruent syllables elicited a significant MMN at Fz. In the visual condition, no negativity was observed at either Fz or Oz. An MMN can be evoked by visual articulatory deviants, provided they are presented in a suitable auditory context leading to a phonetically significant interaction. The recording of an MMN elicited by illusory McGurk percepts suggests that audiovisual integration mechanisms in speech take place rather early during the perceptual processes.
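    The MMN referred to here is conventionally quantified as a deviant-minus-standard difference wave at a fronto-central electrode. The NumPy sketch below shows that computation; the sampling rate, latency window, and synthetic epochs are assumptions for illustration, not parameters from this study.

        import numpy as np

        FS = 512  # Hz, assumed

        def mismatch_negativity(std_epochs, dev_epochs, fs=FS, window=(0.10, 0.25)):
            """Deviant-minus-standard difference wave at one electrode (e.g. Fz)
            and its most negative peak inside the MMN latency window."""
            diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
            i0, i1 = (int(t * fs) for t in window)
            peak_idx = i0 + np.argmin(diff[i0:i1])
            return diff, diff[peak_idx], peak_idx / fs

        # Toy oddball data: 400 standards, 60 deviants, 0.5 s epochs at Fz.
        rng = np.random.default_rng(2)
        n = int(0.5 * FS)
        standards = rng.normal(0, 1, (400, n))
        deviants = rng.normal(0, 1, (60, n))
        # Inject a small negativity around 175 ms into the deviants only.
        t = np.arange(n) / FS
        deviants -= 2.0 * np.exp(-((t - 0.175) ** 2) / (2 * 0.02 ** 2))

        diff, amp, lat = mismatch_negativity(standards, deviants)
        print(f"MMN amplitude {amp:.2f} (a.u.) at {lat * 1000:.0f} ms")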

  13. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height or salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement in the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods, previously employed to examine the representation and processing of acoustic sound features in the human auditory system, to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of the height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
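    In the same spirit, a voxel-wise encoding analysis can be sketched as cross-validated ridge regression from stimulus features (here, placeholder pitch height and salience values) to voxel responses, scoring prediction accuracy on held-out sounds. The code below uses scikit-learn and synthetic data; it illustrates the general approach only, not the study's pipeline, which also compared spectro-temporal models and ran separate decoding analyses.

        import numpy as np
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(3)

        # Synthetic stand-ins: per-sound pitch height and salience (the two model
        # features) and responses of 500 voxels to 120 natural sounds.
        n_sounds, n_voxels = 120, 500
        X = np.column_stack([rng.uniform(60, 880, n_sounds),   # pitch height (Hz)
                             rng.uniform(0, 1, n_sounds)])     # pitch salience
        true_w = rng.normal(0, 1, (2, n_voxels))
        Xz = (X - X.mean(0)) / X.std(0)
        Y = Xz @ true_w + rng.normal(0, 1.0, (n_sounds, n_voxels))

        # Cross-validated, voxel-wise encoding: fit ridge on training sounds,
        # then correlate predicted and measured responses on held-out sounds.
        scores = np.zeros(n_voxels)
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            mu, sd = X[train].mean(0), X[train].std(0)
            model = RidgeCV(alphas=np.logspace(-2, 3, 12)).fit((X[train] - mu) / sd, Y[train])
            pred = model.predict((X[test] - mu) / sd)
            for v in range(n_voxels):
                scores[v] += np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] / 5

        print("median prediction accuracy (r):", np.round(np.median(scores), 2))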

  14. Mapping perception to action in piano practice: a longitudinal DC-EEG study

    PubMed Central

    Bangert, Marc; Altenmüller, Eckart O

    2003-01-01

    Background Performing music requires fast auditory and motor processing. In professional musicians, recent brain imaging studies have demonstrated that auditory stimulation produces a co-activation of motor areas, whereas silent tapping of musical phrases evokes a co-activation in auditory regions. Whether this is obtained via a specific cerebral relay station is unclear. Furthermore, the time course of plasticity has not yet been addressed. Results Changes in cortical activation patterns (DC-EEG potentials) induced by short-term (20 minute) and long-term (5 week) piano learning were investigated during auditory and motor tasks. Two beginner groups were trained. The 'map' group was allowed to learn the standard piano key-to-pitch map. For the 'no-map' group, random assignment of keys to tones prevented such a map. Auditory-sensorimotor EEG co-activity occurred within only 20 minutes. The effect was enhanced after 5-week training, contributing elements of both perception and action to the mental representation of the instrument. The 'map' group demonstrated significant additional activity of right anterior regions. Conclusion We conclude that musical training triggers instant plasticity in the cortex, and that right-hemispheric anterior areas provide an audio-motor interface for the mental representation of the keyboard. PMID:14575529

  15. A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing

    PubMed Central

    Aboitiz, Francisco

    2018-01-01

    In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although the two systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657

  16. Spectral integration in primary auditory cortex attributable to temporally precise convergence of thalamocortical and intracortical input.

    PubMed

    Happel, Max F K; Jeschke, Marcus; Ohl, Frank W

    2010-08-18

    Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils, in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in the close spectral neighborhood of the best frequency of the measurement site, where the overlap between both inputs is maximal. Local intracortical connections provide both direct feedforward excitatory input and modulatory input from adjacent cortical sites, which determines how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections, terminating in cortical layers II/III, information about stimulus energy at greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.
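    The laminar CSD referred to above is commonly estimated as the negative second spatial derivative of the field potential along the electrode axis. The NumPy sketch below implements that standard estimate with an assumed contact spacing and tissue conductivity, using toy data; it does not reproduce the study's pharmacological silencing or residual-CSD analysis.

        import numpy as np

        def csd_1d(lfp, spacing_um=100.0, sigma=0.3):
            """One-dimensional CSD estimate from a laminar LFP profile.

            lfp: array (channels, time) of field potentials from equally spaced
                 contacts (superficial to deep).
            Returns -sigma * d^2(phi)/dz^2 using the standard three-point
            second difference; the outermost channels are dropped.
            """
            h = spacing_um * 1e-6                      # contact spacing in metres
            phi = np.asarray(lfp, dtype=float)
            d2 = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]  # second spatial difference
            return -sigma * d2 / h ** 2                # sigma: conductivity (S/m), assumed

        # Toy laminar recording: 16 contacts, 1 s at 1 kHz, with a graded
        # negative deflection centered on a mid channel shortly after "stimulus".
        rng = np.random.default_rng(4)
        t = np.arange(1000) / 1000.0
        lfp = rng.normal(0, 5e-6, (16, t.size))
        bump = np.exp(-((t - 0.02) ** 2) / (2 * 0.005 ** 2))
        for ch, amp in ((6, 25e-6), (7, 50e-6), (8, 25e-6)):
            lfp[ch] -= amp * bump

        csd = csd_1d(lfp)
        print("CSD shape (channels-2, time):", csd.shape)
        print("strongest sink on contact", 1 + np.argmin(csd.min(axis=1)))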

  17. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging

    PubMed Central

    Henschke, Julia U.; Ohl, Frank W.; Budinger, Eike

    2018-01-01

    During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (growth-associated protein 43, GAP-43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched sensory modality, but also from nuclei of non-matched modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals. PMID:29551970

  19. Signal Processing, Pattern Formation and Adaptation in Neural Oscillators

    DTIC Science & Technology

    2016-11-29

    ... nonlinear oscillations of outer hair cells. We obtained analytical forms for auditory tuning curves of both unidirectionally and bidirectionally coupled ... oscillations of outer hair cells in the cochlea, mode-locking of chopper cells to sound in the cochlear nucleus, and entrainment of cortical ... oscillations of outer hair cells (e.g., Fredrickson-Hemsing, Ji, Bruinsma, & Bozovic, 2012), mode-locking of choppers in the cochlear nucleus (e.g., Laudanski ...

  20. Electrocorticographic representations of segmental features in continuous speech

    PubMed Central

    Lotte, Fabien; Brumberg, Jonathan S.; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Guan, Cuntai; Schalk, Gerwin

    2015-01-01

    Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates. PMID:25759647
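    A toy version of decoding one segmental feature (voicing) from per-phoneme neural feature vectors might look like the Python sketch below. The phoneme-to-feature table follows standard phonetics, while the "electrode" features, electrode count, and classifier choice are placeholders rather than the study's ECoG pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Segmental feature labels for a few English consonants (standard
        # phonetics, independent of the study's own label set).
        FEATURES = {          # phoneme: (place, manner, voiced)
            "p": ("bilabial", "stop", 0),         "b": ("bilabial", "stop", 1),
            "t": ("alveolar", "stop", 0),         "d": ("alveolar", "stop", 1),
            "s": ("alveolar", "fricative", 0),    "z": ("alveolar", "fricative", 1),
            "f": ("labiodental", "fricative", 0), "v": ("labiodental", "fricative", 1),
        }

        rng = np.random.default_rng(5)
        phones = rng.choice(list(FEATURES), 400)             # phoneme sequence
        voiced = np.array([FEATURES[p][2] for p in phones])  # voicing labels

        # Stand-in for per-phoneme cortical features (e.g. high-gamma power on
        # 64 electrodes, averaged over a window around each phoneme onset).
        X = rng.normal(0, 1, (400, 64))
        X[:, :8] += 0.8 * voiced[:, None]    # let a few "electrodes" carry voicing

        acc = cross_val_score(LogisticRegression(max_iter=1000), X, voiced, cv=5)
        print("voicing decoding accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))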
