Mendez, M F; Geehan, G R
The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968
Ruusuvirta, Timo; Lipponen, Arto; Pellinen, Eeva; Penttonen, Markku; Astikainen, Piia
Any change in the invariant aspects of the auditory environment is of potential importance. The human brain preattentively, or automatically, detects such changes. The mismatch negativity (MMN) of event-related potentials (ERPs) reflects this initial stage of auditory change detection. The origin of MMN is held to be cortical. The hippocampus is associated with the later generated P3a of ERPs, reflecting involuntary attention switches towards auditory changes that are high in magnitude. The evidence for this cortico-hippocampal dichotomy is scarce, however. To shed further light on this issue, auditory cortical and hippocampal-system (CA1, dentate gyrus, subiculum) local-field potentials were recorded in urethane-anesthetized rats. A rare tone differing in duration (deviant) was interspersed with a repeated tone (standard). Two standard-to-standard (SSI) and standard-to-deviant (SDI) intervals (200 ms vs. 500 ms) were applied in different combinations to vary the observability of responses resembling MMN (mismatch responses). Mismatch responses were observed at 51.5-89 ms with the 500-ms SSI coupled with the 200-ms SDI, but not with the three remaining combinations. Most importantly, the responses appeared in both the auditory-cortical and hippocampal locations. The findings suggest that the hippocampus may play a role in the (cortical) manifestation of MMN.
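A duration-oddball sequence of this general kind can be sketched in a few lines; everything below (tone count, durations, deviant probability) is illustrative and does not reproduce the study's SSI/SDI interval manipulation:

```python
import random

def oddball_sequence(n_tones=200, deviant_prob=0.1,
                     standard_dur_ms=50, deviant_dur_ms=100, seed=0):
    """Repeated standard tone with rare duration deviants.
    All parameter values are illustrative, not the study's."""
    rng = random.Random(seed)
    return [("deviant", deviant_dur_ms) if rng.random() < deviant_prob
            else ("standard", standard_dur_ms)
            for _ in range(n_tones)]

seq = oddball_sequence()
n_deviants = sum(1 for kind, _ in seq if kind == "deviant")
print(len(seq), n_deviants)
```

Extending this to the study's design would mean additionally controlling the inter-tone intervals (SSI and SDI) rather than only the tone durations.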
Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A.; Wang, Xiaoqin
The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. PMID:25392481
Nelson, Anders; Schneider, David M; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan; Mooney, Richard
Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287
Kraus, Nina; Nicol, Trent
We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.
Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia
The use of hearing aids by individuals with hearing loss improves quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children; individuals with auditory neuropathy spectrum disorder, autism, or intellectual deficits; and adults and the elderly with dementia. These populations are often unable to undergo behavioral assessment and generate a growing demand for objective methods of hearing evaluation. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. This study aimed to determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group) and 31 adults with normal hearing (control group). An automated system for the detection, analysis, and recording of cortical responses (HEARLab(®)) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the
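The linear-regression step relating the two threshold measures can be sketched as an ordinary least-squares fit; the paired values below are invented for illustration (a roughly constant cortical elevation over the behavioral threshold), not the study's data:

```python
import numpy as np

# Invented paired thresholds (dB) for illustration only.
behavioral = np.array([20, 30, 40, 50, 60, 70], dtype=float)
cortical = behavioral + np.array([6, 9, 7, 8, 10, 7], dtype=float)

# Ordinary least squares: cortical = slope * behavioral + intercept
slope, intercept = np.polyfit(behavioral, cortical, 1)
r = np.corrcoef(behavioral, cortical)[0, 1]

# Average elevation of the cortical over the behavioral threshold
mean_offset = float(np.mean(cortical - behavioral))
print(slope, intercept, r, mean_offset)
```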
Acoustic environments are composed of complex overlapping sounds that the auditory system is required to segregate into discrete perceptual objects. The functions of distinct auditory processing stations in this challenging task are poorly understood. Here we show a direct role for mouse auditory cortex in detection and segregation of acoustic information. We measured the sensitivity of auditory cortical neurons to brief tones embedded in masking noise. By altering spectrotemporal characteristics of the masker, we reveal that sensitivity to pure tone stimuli is strongly enhanced in coherently modulated broadband noise, corresponding to the psychoacoustic phenomenon comodulation masking release. Improvements in detection were largest following priming periods of noise alone, indicating that cortical segregation is enhanced over time. Transient opsin-mediated silencing of auditory cortex during the priming period almost completely abolished these improvements, suggesting that cortical processing may play a direct and significant role in detection of quiet sounds in noisy environments. SIGNIFICANCE STATEMENT Auditory systems are adept at detecting and segregating competing sound sources, but there is little direct evidence of how this process occurs in the mammalian auditory pathway. We demonstrate that coherent broadband noise enhances signal representation in auditory cortex, and that prolonged exposure to noise is necessary to produce this enhancement. Using optogenetic perturbation to selectively silence auditory cortex during early noise processing, we show that cortical processing plays a crucial role in the segregation of competing sounds. PMID:27927950
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs, whereas lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco
Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early-deaf individuals was paired with a reduced response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing on-line the upcoming visual events, indicating that cross-modally recruited auditory cortices can reach this level of computation.
Ades, H. W.
The physical correlates of hearing, i.e. the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.
Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X
The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to the inferior colliculus (IC), the nucleus of the auditory midbrain. While these projections modulate auditory processing throughout the IC, single-neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical
Puvvada, Krishna C; Simon, Jonathan Z
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
Lee, Hweeling; Noppeney, Uta
To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept.
Rojas, Donald C.; Slason, Erin; Teale, Peter D.; Reite, Martin L.
Deficits in basic auditory perception have been described in schizophrenia. Previous electrophysiological imaging research has documented a structure-function dissociation in the auditory system and altered tonotopic mapping in schizophrenia. The present study examined auditory cortical tuning in patients with schizophrenia. Eighteen patients with schizophrenia and 15 comparison subjects were recorded in a magnetoencephalographic (MEG) experiment of auditory tuning. Auditory cortical tuning at 1 kHz was examined by delivering 1 kHz pure tones in conjunction with pure tones at 5 frequencies surrounding and including 1 kHz. Source reconstruction data were examined for evidence of frequency specificity for the M100 component. There was a significant broadening of tuning in the schizophrenia group evident in the source amplitude of the M100. The frequently reported reduction in anterior-posterior source asymmetry for individuals with schizophrenia was replicated in this experiment. No relationships between symptom severity ratings and MEG measures were observed. This finding suggests that the frequency specificity of the M100 auditory evoked field is disturbed in schizophrenia and may help explain the relatively poor behavioral performance of schizophrenia patients on simple frequency discrimination tasks. PMID:17851045
Malone, Brian J; Scott, Brian H; Semple, Malcolm N
The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones are encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory "edges," particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex.
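As a heavily simplified stand-in for the spike-train classification described above, one can assign each trial's spike count to the nearest class mean. All rates, durations, and trial counts below are invented, and this deliberately ignores the temporal (onset/offset burst) structure the study emphasizes:

```python
import numpy as np

rng = np.random.default_rng(1)
durations = [50, 100, 200]   # tone durations in ms (illustrative)
rate = 0.08                  # spikes per ms (illustrative)

# "Training": mean Poisson spike count per duration over 50 trials.
train_means = {d: rng.poisson(rate * d, size=50).mean() for d in durations}

def classify(count):
    # Nearest-class-mean decoding on the spike count of a single trial.
    return min(durations, key=lambda d: abs(count - train_means[d]))

# Decode fresh test trials and measure accuracy.
correct = 0
n_trials = 300
for _ in range(n_trials // len(durations)):
    for d in durations:
        correct += classify(rng.poisson(rate * d)) == d
acc = correct / n_trials
print(acc)
```

A likelihood-based decoder, as in the study, would replace the nearest-mean rule with per-class Poisson (or richer) likelihoods evaluated on the full spike train.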
Froemke, Robert C.; Martins, Ana Raquel O.
The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. PMID:21426927
Berlau, Kasia M.; Weinberger, Norman M.
Learning modifies the primary auditory cortex (A1) to emphasize the processing and representation of behaviorally relevant sounds. However, the factors that determine cortical plasticity are poorly understood. While the type and amount of learning are assumed to be important, the actual strategies used to solve learning problems might be critical. To investigate this possibility, we trained two groups of adult male Sprague–Dawley rats to bar-press (BP) for water contingent on the presence of a 5.0 kHz tone using two different strategies: BP during tone presence or BP from tone-onset until receiving an error signal after tone cessation. Both groups achieved the same high levels of correct performance and both groups revealed equivalent learning of absolute frequency during training. Post-training terminal “mapping” of A1 showed no change in representational area of the tone signal frequency but revealed other substantial cue-specific plasticity that developed only in the tone-onset-to-error strategy group. Threshold was decreased ~10 dB and tuning bandwidth was narrowed by ~0.7 octaves. As sound onsets have greater perceptual weighting and cortical discharge efficacy than continual sound presence, the induction of specific learning-induced cortical plasticity may depend on the use of learning strategies that best exploit cortical proclivities. The present results also suggest a general principle for the induction and storage of plasticity in learning, viz., that the representation of specific acquired information may be selected by neurons according to a match between behaviorally selected stimulus features and circuit/network response properties. PMID:17707663
Atencio, Craig A.; Sharpee, Tatyana O.; Schreiner, Christoph E.
Cortical receptive fields represent the signal preferences of sensory neurons. Receptive fields are thought to provide a representation of sensory experience from which the cerebral cortex may make interpretations. While it is essential to determine a neuron’s receptive field, it remains unclear which features of the acoustic environment are specifically represented by neurons in the primary auditory cortex (AI). We characterized cat AI spectrotemporal receptive fields (STRFs) by finding both the spike-triggered average (STA) and stimulus dimensions that maximized the mutual information between response and stimulus. We derived a nonlinearity relating spiking to stimulus projection onto two maximally informative dimensions (MIDs). The STA was highly correlated with the first MID. Generally, the nonlinearity for the first MID was asymmetric and often monotonic in shape, while the second MID nonlinearity was symmetric and non-monotonic. The joint nonlinearity for both MIDs revealed that most first and second MIDs were synergistic, and thus should be considered conjointly. The difference between the nonlinearities suggests different possible roles for the MIDs in auditory processing. PMID:18579084
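For orientation, the spike-triggered average is simply the mean stimulus history preceding each spike. A toy sketch with a synthetic white-noise stimulus and a made-up threshold "neuron" (not the study's stimuli, cells, or information-theoretic MID estimation):

```python
import numpy as np

rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)   # white-noise stimulus, arbitrary units
win = 20                           # samples of stimulus history per spike

# Made-up 'neuron': spikes whenever a 5-sample running average is high.
drive = np.convolve(stim, np.ones(5) / 5, mode="same")
spikes = np.flatnonzero(drive > 1.0)
spikes = spikes[spikes >= win]     # keep spikes with a full history window

# STA: mean of the stimulus windows that precede each spike.
sta = np.mean([stim[t - win:t] for t in spikes], axis=0)
print(sta.shape)
```

For this linear-threshold toy cell the STA recovers the smoothing filter; the MID analysis in the abstract goes further, finding stimulus dimensions that maximize mutual information and so can capture features the STA misses.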
Thoma, R J; Hanlon, F M; Sanchez, N; Weisend, M P; Huang, M; Jones, A; Miller, G A; Canive, J M
Both an EEG P50 sensory gating deficit and abnormalities of temporal lobe structure are considered characteristic of schizophrenia. The standard P50 sensory gating measure does not foster differential assessment of left- and right-hemisphere contributions, but its analogous MEG M50 component may be used to measure gating of distinct auditory source dipoles localizing to left- and right-hemisphere primary auditory cortex. The present study sought to determine how the sensory gating ratio relates to cortical thickness at the site of the auditory dipole localization. A standard auditory paired-click paradigm was used during MEG for patients (n=22) and normal controls (n=11). Sensory gating ratios were determined by measuring the strength of the 50 ms response to the second click divided by that of the first click (S2/S1). Cortical thickness was assessed by two reliable raters using 3D sMRI. Results showed that: (1) patients had a P50 and left M50 sensory gating deficit relative to controls; (2) cortex in both hemispheres was thicker in the control group; (3) in schizophrenia, poorer left-hemisphere M50 sensory gating correlated with thinner left-hemisphere auditory cortex; and (4) poorer right-hemisphere M50 sensory gating correlated with thinner right-hemisphere auditory cortex in patients. The MEG-assessed, hemisphere-specific auditory sensory gating ratio may be driven by this structural abnormality in auditory cortex.
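The S2/S1 gating ratio used here is straightforward arithmetic; a minimal sketch with invented amplitudes:

```python
def gating_ratio(s1_amplitude, s2_amplitude):
    """S2/S1 ratio: values near 0 indicate strong gating (suppression of
    the response to the second click); values near or above 1 suggest a
    gating deficit. Amplitude values below are invented examples."""
    if s1_amplitude == 0:
        raise ValueError("S1 amplitude must be nonzero")
    return s2_amplitude / s1_amplitude

print(gating_ratio(10.0, 3.0))  # 0.3 -> substantial suppression of S2
print(gating_ratio(10.0, 9.0))  # 0.9 -> weak gating, deficit-like
```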
In a complex auditory scene, a “cocktail party” for example, listeners can disentangle multiple competing sequences of sounds. A recent psychophysical study in our laboratory demonstrated a robust spatial component of stream segregation showing ∼8° acuity. Here, we recorded single- and multiple-neuron responses from the primary auditory cortex of anesthetized cats while presenting interleaved sound sequences that human listeners would experience as segregated streams. Sequences of broadband sounds alternated between pairs of locations. Neurons synchronized preferentially to sounds from one or the other location, thereby segregating competing sound sequences. Neurons favoring one source location or the other tended to aggregate within the cortex, suggestive of modular organization. The spatial acuity of stream segregation was as narrow as ∼10°, markedly sharper than the broad spatial tuning for single sources that is well known in the literature. Spatial sensitivity was sharpest among neurons having high characteristic frequencies. Neural stream segregation was predicted well by a parameter-free model that incorporated single-source spatial sensitivity and a measured forward-suppression term. We found that the forward suppression was not due to postdischarge adaptation in the cortex and, therefore, must have arisen in the subcortical pathway or at the level of thalamocortical synapses. A linear-classifier analysis of single-neuron responses to rhythmic stimuli like those used in our psychophysical study yielded thresholds overlapping those of human listeners. Overall, the results indicate that the ascending auditory system does the work of segregating auditory streams, bringing them to discrete modules in the cortex for selection by top-down processes. PMID:23825404
Munivrana, Boska; Mildner, Vesna
In some cochlear implant users, success is not achieved in spite of optimal clinical factors (including age at implantation, duration of rehabilitation and post-implant hearing level), which may be attributed to disorders at higher levels of the auditory pathway. We used cortical auditory evoked potentials to investigate the ability to perceive…
Guenther, Frank H.; Nieto-Castanon, Alfonso; Ghosh, Satrajit S.; Tourville, Jason A.
Functional magnetic resonance imaging (fMRI) was used to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical (good) and nonprototypical (bad) examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation…
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Sharma, Anu; Cardon, Garrett
Cortical development is dependent to a large extent on stimulus-driven input. Auditory Neuropathy Spectrum Disorder (ANSD) is a recently described form of hearing impairment where neural dys-synchrony is the predominant characteristic. Children with ANSD provide a unique platform to examine the effects of asynchronous and degraded afferent stimulation on cortical auditory neuroplasticity and behavioral processing of sound. In this review, we describe patterns of auditory cortical maturation in children with ANSD. The disruption of cortical maturation that leads to these various patterns includes high levels of intra-individual cortical variability and deficits in cortical phase synchronization of oscillatory neural responses. These neurodevelopmental changes, which are constrained by sensitive periods for central auditory maturation, are correlated with behavioral outcomes for children with ANSD. Overall, we hypothesize that patterns of cortical development in children with ANSD appear to be markers of the severity of the underlying neural dys-synchrony, providing prognostic indicators of success of clinical intervention with amplification and/or electrical stimulation. PMID:26070426
Bakhos, D; Roux, S; Robier, A; Bonnet-Brilhault, F; Lescanne, E; Bruneau, N
In congenitally deaf children fitted with a cochlear implant, little is known about the maturation of the auditory cortex. Cortical auditory evoked potentials are a useful methodology for studying the auditory cortical system of children with cochlear implants. Nevertheless, these recordings are contaminated by a cochlear implant artifact. The objective of this study was to use independent component analysis to minimize the cochlear implant artifact in order to study cortical auditory evoked potentials. Prospective study. A total of 5 children, ranging in age from 21 to 49 months, who had been fitted with a cochlear implant for at least 6 months were included in this study. The stimuli were pure tones (750 Hz, 200 ms duration, 70 dB SPL) presented with an irregular interstimulus interval (1000-2000 ms) via loudspeakers. The cortical auditory evoked potentials were recorded from 17 Ag-AgCl electrodes referenced to the nose. The peak latency and amplitude of each deflection culminating at the fronto-central and temporal sites were analyzed; the main outcome measures were the P100-N250 peak latencies and amplitudes of the cortical auditory evoked potentials recorded from the children fitted with cochlear implants. Scalp potential maps were computed for each child for the N250 wave. The use of independent component analysis made it possible to minimize the cochlear implant artifact in all five children. Cortical auditory evoked potentials were recorded at fronto-central and temporal sites. Scalp potential maps for the N250 wave showed activation of temporal generators contralateral to the CI in all five children. This preliminary electrophysiological study confirms the value and the limits of independent component analysis. It could allow longitudinal studies in cochlear implant users to examine the maturation of the auditory cortex. It could also be used to identify objective cortical electrophysiological measures to help with the fitting of CIs in children. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Fitzroy, Ahren B.; Krizman, Jennifer; Tierney, Adam; Agouridou, Manto; Kraus, Nina
Cross-sectional studies have demonstrated that the cortical auditory evoked potential (CAEP) changes substantially in amplitude and latency from childhood to adulthood, suggesting that these aspects of the CAEP continue to mature through adolescence. However, no study to date has longitudinally followed maturation of these CAEP measures through this developmental period. Additionally, no study has examined the trial-to-trial variability of the CAEP during adolescence. Therefore, we longitudinally tracked changes in the latency, amplitude, and variability of the P1, N1, P2, and N2 components of the CAEP in 68 adolescents from age 14 years to age 17 years. Latency decreased for N1 and N2, and did not change for P1 or P2. Amplitude decreased for P1 and N2, increased for N1, and did not change for P2. Variability decreased with age for all CAEP components. These findings provide longitudinal support for the view that the human auditory system continues to mature through adolescence. Continued auditory system maturation through adolescence suggests that CAEP neural generators remain plastic during this age range and potentially amenable to experience-based enhancement or deprivation. PMID:26539092
Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.
Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610
Mao, Yu-Ting; Hua, Tian-Miao
Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into
van Wassenhove, Virginie; Grzeczkowski, Lukasz
Active sensing has important consequences on multisensory processing (Schroeder et al., 2010). Here, we asked whether in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the “where” and the “when” of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or center of the screen. Participants counted color changes of the fixation cross while neglecting sounds, which could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses (“when” prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the contralateral side of sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of “when” a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position, or spatial congruency of auditory and visual events. In contrast, auditory cortical responses were not significantly affected by eye position, suggesting that “where” predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds. PMID:25705174
Thabet, Mirahan T; Said, Nithreen M
Cortical auditory evoked potentials are a non-invasive tool that can provide objective information on the maturation of the auditory pathways. This work was designed to study the role of the cortical auditory evoked potential (P1) in assessing the benefits of amplification and aural rehabilitation in hearing-impaired children. The study comprised 31 children classified into two groups. The study group included 18 hearing-impaired children, aged 4-14 years, classified into two subgroups according to the adequacy of aural rehabilitation. The control group consisted of 13 normal-hearing children aged 5-13 years. All children underwent history taking, basic audiological evaluation, intelligence quotient testing, and language assessment. The cortical auditory evoked potential (P1) was measured using the synthesized speech syllable /da/ as the eliciting stimulus, presented binaurally via a loudspeaker. P1 was recorded in all children, with significantly prolonged latencies in hearing-impaired children with inadequate rehabilitation. P1 latency was correlated with the duration of hearing loss in hearing-impaired children with inadequate aural rehabilitation. Auditory experience was correlated with P1 latency in hearing-impaired children with adequate aural rehabilitation. The cortical auditory evoked potential (P1) might provide a clinical tool to monitor aural rehabilitation outcomes and to guide intervention choices. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Nagasawa, Tetsuro; Rothermel, Robert; Juhász, Csaba; Fukuda, Miho; Nishida, Masaaki; Akiyama, Tomoyuki; Sood, Sandeep; Asano, Eishi
Human activities often involve hand-motor responses following external auditory-verbal commands. It has been believed that hand movements are predominantly driven by the contralateral primary sensorimotor cortex, whereas auditory-verbal information is processed in both superior temporal gyri. It remains unknown whether cortical activation in the superior temporal gyrus during an auditory-motor task is affected by the laterality of hand-motor responses. Here, event-related gamma-oscillations were intracranially recorded as quantitative measures of cortical activation; we determined how cortical structures were activated by auditory-cued movement of each hand in 15 patients with focal epilepsy. Auditory-verbal stimuli elicited augmentation of gamma-oscillations in a posterior portion of the superior temporal gyrus, whereas hand-motor responses elicited gamma-augmentation in the pre- and post-central gyri. The magnitudes of such gamma-augmentation in the superior temporal, pre-central and post-central gyri were significantly larger when the hand contralateral to the recorded hemisphere was required for motor responses than when the ipsilateral hand was. The superior temporal gyrus in each hemisphere might play a more pivotal role when the contralateral hand needs to be used for motor responses than when the ipsilateral hand does. PMID:20143383
Muller, Viktor; Gruber, Walter; Klimesch, Wolfgang; Lindenberger, Ulman
Using electroencephalographic recordings (EEG), we assessed differences in oscillatory cortical activity during auditory-oddball performance between children aged 9-13 years, younger adults, and older adults. From childhood to old age, phase synchronization increased within and between electrodes, whereas whole power and evoked power decreased. We…
Carcea, Ioana; Insanally, Michele N.; Froemke, Robert C.
Behavioural engagement can enhance sensory perception. However, the neuronal mechanisms by which behavioural states affect stimulus perception remain poorly understood. Here we record from single units in auditory cortex of rats performing a self-initiated go/no-go auditory task. Self-initiation transforms cortical tuning curves and bidirectionally modulates stimulus-evoked activity patterns and improves auditory detection and recognition. Trial self-initiation decreases the rate of spontaneous activity in the majority of recorded cells. Optogenetic disruption of cortical activity before and during tone presentation shows that these changes in evoked and spontaneous activity are important for sound perception. Thus, behavioural engagement can prepare cortical circuits for sensory processing by dynamically changing sound representation and by controlling the pattern of spontaneous activity. PMID:28176787
Herraiz, C; Diges, I; Cobo, P; Aparicio, J M
Scientific evidence has demonstrated reorganisation processes in the auditory cortex after sensorineural hearing loss and after overstimulation of certain tonotopic cortical areas, as seen in auditory conditioning techniques. Acoustic rehabilitation reduces the impact of these reorganisation changes. Recent theories explain tinnitus mechanisms as a negative consequence of neural plasticity in the central nervous system after a peripheral aggression. Auditory discrimination training (ADT) could partially reverse these maladaptive changes in tonotopic representation and improve tinnitus. We discuss different studies and their efficacy in reducing tinnitus perception and annoyance. Indications, method, dose and sound strategy need to be implemented.
Chandrasekaran, Chandramouli; Turesson, Hjalmar K; Brown, Charles H; Ghazanfar, Asif A
The efficient cortical encoding of natural scenes is essential for guiding adaptive behavior. Because natural scenes and network activity in cortical circuits share similar temporal scales, it is necessary to understand how the temporal structure of natural scenes influences network dynamics in cortical circuits and spiking output. We examined the relationship between the structure of natural acoustic scenes and its impact on network activity [as indexed by local field potentials (LFPs)] and spiking responses in macaque primary auditory cortex. Natural auditory scenes led to a change in the power of the LFP in the 2-9 and 16-30 Hz frequency ranges relative to the ongoing activity. In contrast, ongoing rhythmic activity in the 9-16 Hz range was essentially unaffected by the natural scene. Phase coherence analysis showed that scene-related changes in LFP power were at least partially attributable to the locking of the LFP and spiking activity to the temporal structure in the scene, with locking extending up to 25 Hz for some scenes and cortical sites. Consistent with distributed place and temporal coding schemes, a key predictor of phase locking and power changes was the overlap between the spectral selectivity of a cortical site and the spectral structure of the scene. Finally, during the processing of natural acoustic scenes, spikes were locked to LFP phase at frequencies up to 30 Hz. These results are consistent with the idea that the cortical representation of natural scenes emerges from an interaction between network activity and stimulus dynamics.
Hong, Xiangfei; Tong, Shanbao
Auditory attentional effort (AAE) can be tuned to different levels in a top-down manner, but its neural correlates are still poorly understood. In this paper, we investigate cortical connectivity under different levels of AAE. Multichannel EEG signals were recorded from nine subjects (male/female = 6/3) in an auditory discrimination task under low or high AAE. Behavioral results showed that subjects paid more attention under high AAE and detected the probe stimuli better than under low AAE. Partial directed coherence (PDC) was used to study cortical functional connectivity within the first 300 ms post-stimulus period, which includes the N100 and P200 components of the event-related potential (ERP). The majority of cortical connections were strengthened with increasing AAE. A right hemispheric dominance of connectivity in maintaining auditory attention was found under low AAE, which disappeared when the AAE was increased, indicating that the right hemispheric dominance previously reported might be due to a relatively low AAE. In addition, most cortical connections under high AAE were found to run from the parietal cortex to the prefrontal cortex, suggesting an initiative role of the parietal cortex in maintaining a high AAE.
Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine
Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude-modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results in twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of the T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere, independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of the PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
Bidelman, Gavin M
Simultaneous recording of brainstem and cortical event-related brain potentials (ERPs) may offer a valuable tool for understanding the early neural transcription of behaviorally relevant sounds and the hierarchy of signal processing operating at multiple levels of the auditory system. To date, dual recordings have been challenged by technological and physiological limitations, including the different optimal parameters necessary to elicit each class of ERP (e.g., differential adaptation/habituation effects and the number of trials needed to obtain adequate response signal-to-noise ratio). We investigated a new stimulus paradigm for concurrent recording of the auditory brainstem frequency-following response (FFR) and cortical ERPs. The paradigm is "optimal" in that it uses a clustered stimulus presentation and variable interstimulus interval (ISI) to (i) achieve the most ideal acquisition parameters for eliciting subcortical and cortical responses, (ii) obtain an adequate number of trials to detect each class of response, and (iii) minimize neural adaptation/habituation effects. Comparison between clustered and traditional (fixed, slow ISI) stimulus paradigms revealed minimal change in the amplitudes or latencies of either the brainstem FFR or cortical ERP. The clustered paradigm offered over a 3× increase in recording efficiency compared to the conventional (fixed ISI) presentation and thus a more rapid protocol for obtaining dual brainstem-cortical recordings in individual listeners. We infer that faster recording of subcortical and cortical potentials might allow more complete and sensitive testing of neurophysiological function and aid in the differential assessment of auditory function. Copyright © 2014 Elsevier B.V. All rights reserved.
Foxe, John J; Burke, Kelly M; Andrade, Gizely N; Djukic, Aleksandra; Frey, Hans-Peter; Molholm, Sophie
Over the typical course of Rett syndrome, initial language and communication abilities deteriorate dramatically between the ages of 1 and 4 years, and a majority of these children go on to lose all oral communication abilities. It becomes extremely difficult for clinicians and caretakers to accurately assess the level of preserved auditory functioning in these children, an issue of obvious clinical import. Non-invasive electrophysiological techniques allow for the interrogation of auditory cortical processing without the need for overt behavioral responses. In particular, the mismatch negativity (MMN) component of the auditory evoked potential (AEP) provides an excellent and robust dependent measure of change detection and auditory sensory memory. Here, we asked whether females with Rett syndrome would produce the MMN to occasional changes in pitch in a regularly occurring stream of auditory tones. Fourteen girls with genetically confirmed Rett syndrome and 22 age-matched neurotypical controls participated (ages 3.9-21.1 years). High-density electrophysiological recordings from 64 scalp electrodes were made while participants passively listened to a regularly occurring stream of 503-Hz auditory tone pips that was occasionally (15% of presentations) interrupted by a higher-pitched deviant tone of 996 Hz. The MMN was derived by subtracting the AEP to the standard from the AEP produced to these deviants. Despite clearly anomalous morphology and latency of the AEP to simple pure-tone inputs in Rett syndrome, the MMN response was evident in both neurotypicals and Rett patients. However, we found that the pitch-evoked MMN was both delayed and protracted in duration in Rett, pointing to slowing of auditory responsiveness. The presence of the MMN in Rett patients suggests preserved abilities to process pitch changes in auditory sensory memory. This work represents a beginning step in an effort to comprehensively map the extent of auditory cortical functioning in Rett syndrome.
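The MMN derivation described above is a difference wave, conventionally computed as the deviant AEP minus the standard AEP. The sketch below is schematic only: the epoch length, waveform shape, and timing are invented for illustration and are not the study's data.

```python
import numpy as np

# One sample per millisecond over a 400 ms epoch (illustrative only).
t_ms = np.arange(400)
standard_aep = np.zeros(400)             # averaged AEP to the standard tone
deviant_aep = np.zeros(400)
deviant_aep[150:250] -= np.hanning(100)  # extra negativity for the deviant

# MMN difference wave: deviant minus standard.
mmn = deviant_aep - standard_aep
peak_latency_ms = int(t_ms[np.argmin(mmn)])  # most negative point of the wave
```

In practice both AEPs are averages over many artifact-rejected trials; a delayed and temporally protracted MMN, as reported here for Rett syndrome, would appear in this difference wave as a later peak latency and a broader negative deflection.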
Atencio, Craig A.; Sharpee, Tatyana O.; Schreiner, Christoph E.
Sensory cortical anatomy has identified a canonical microcircuit underlying computations between and within layers. This feed-forward circuit processes information serially from granular to supragranular and to infragranular layers. How this substrate correlates with an auditory cortical processing hierarchy is unclear. We recorded simultaneously from all layers in cat primary auditory cortex (AI) and estimated spectrotemporal receptive fields (STRFs) and associated nonlinearities. Spike-triggered averaged STRFs revealed that temporal precision, spectrotemporal separability, and feature selectivity varied with layer according to a hierarchical processing model. STRFs from maximally informative dimension (MID) analysis confirmed hierarchical processing. Of two cooperative MIDs identified for each neuron, the first comprised the majority of stimulus information in granular layers. Second MID contributions and nonlinear cooperativity increased in supragranular and infragranular layers. The AI microcircuit provides a valid template for three independent hierarchical computation principles. Increases in processing complexity, STRF cooperativity, and nonlinearity correlate with the synaptic distance from granular layers. PMID:19918079
Renvall, Hanna; Staeren, Noël; Barz, Claudia S; Ley, Anke; Formisano, Elia
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in
Lopez-Soto, Teresa; Postigo-Madueno, Amparo; Nunez-Abades, Pedro
In centrally related hearing loss, there is no apparent damage to the auditory system, yet the patient is unable to hear sounds. In patients with cortical hearing loss (and in the absence of a communication deficit, either total or partial, as in agnosia or aphasia), some attention-related or language-based disorders may lead to a wrong diagnosis of hearing impairment. The authors present two patients (8 and 11 years old) with no anatomical damage to the ear and no neurological damage or trauma, but with immature cortical auditory evoked potentials. Both patients presented a clinical history of multiple diagnoses over several years. Because the most visible symptom was moderate hearing loss, the patients were recurrently referred for audiological testing, with no improvement. This report describes the use of long-latency evoked potentials to identify cases of cortical hearing loss, where hearing impairment is a consequence of underdevelopment of the central nervous system. PMID:27006780
Billings, Curtis J; McMillan, Garnett P; Penman, Tina M; Gille, Sun Mi
Speech perception in background noise is a common challenge across individuals and health conditions (e.g., hearing impairment, aging). Both behavioral and physiological measures have been used to understand the important factors that contribute to perception-in-noise abilities. The addition of a physiological measure provides additional information about signal-in-noise encoding in the auditory system and may be useful in clarifying some of the variability in perception-in-noise abilities across individuals. Fifteen young normal-hearing individuals were tested using both electrophysiological and behavioral methods to determine (1) the effects of signal-to-noise ratio (SNR) and signal level and (2) how well cortical auditory evoked potentials (CAEPs) can predict perception in noise. Three correlation/regression approaches were used to determine how well CAEPs predicted behavior. Main effects of SNR were found for both electrophysiology and speech perception measures, while signal level effects were generally found only for speech testing. These results demonstrate that when signals are presented in noise, sensitivity to SNR cues obscures any encoding of signal level cues. Electrophysiology and behavioral measures were strongly correlated. The best physiological predictors (e.g., latency, amplitude, and area of CAEP waves) of behavior (the SNR at which 50% of the sentence is understood) were N1 latency and N1 amplitude measures. In addition, behavior was best predicted by the 70-dB signal/5-dB SNR CAEP condition. It will be important in future studies to determine the relationship of electrophysiology and behavior in populations who experience difficulty understanding speech in noise, such as those with hearing impairment or age-related deficits.
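One of the correlation/regression approaches mentioned above can be sketched as an ordinary least-squares fit predicting the behavioral SNR-50 from N1 latency and amplitude. All numbers below are simulated placeholders standing in for the study's 15 subjects, not its data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-subject measures: N1 latency (ms), N1 amplitude (uV),
# and behavioral SNR-50 (dB) for 15 simulated listeners.
n1_latency = rng.normal(110, 8, size=15)
n1_amplitude = rng.normal(-4.0, 1.0, size=15)
snr50 = 0.05 * n1_latency + 0.5 * n1_amplitude + rng.normal(0, 0.5, 15)

# Least-squares fit: predict behavior from the two CAEP measures.
X = np.column_stack([np.ones(15), n1_latency, n1_amplitude])
coef, *_ = np.linalg.lstsq(X, snr50, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, snr50)[0, 1]  # multiple correlation
```

A strong `r` on held-out subjects would support the abstract's claim that CAEP measures predict perception-in-noise ability.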
Pearce, Wendy; Golding, Maryanne; Dillon, Harvey
Infants with auditory neuropathy and possible hearing impairment are being identified at very young ages through the implementation of hearing screening programs. The diagnosis is commonly based on evidence of normal cochlear function but abnormal brainstem function. This lack of normal brainstem function is highly problematic when prescribing amplification in young infants because prescriptive formulae require the input of hearing thresholds that are normally estimated from auditory brainstem responses to tonal stimuli. Without this information, there is great uncertainty surrounding the final fitting. Cortical auditory evoked potentials may, however, still be evident and reliably recorded to speech stimuli presented at conversational levels. The case studies of two infants are presented that demonstrate how these higher order electrophysiological responses may be utilized in the audiological management of some infants with auditory neuropathy.
Bach, Adám; Tóth, Ferenc; Matievics, Vera; Kiss, József Géza; Jóri, József; Szakál, Beáta; Balogh, Norbert; Soós, Alexandra; Rovó, László
Cortical auditory evoked potentials can provide objective information about the highest level of the auditory system. The authors' purpose was to introduce a new tool, the "HEARLab", which can be routinely used in clinical practice for the measurement of cortical auditory evoked potentials. In addition, they wanted to establish norms for the analyzed parameters in subjects with normal hearing. 25 adults with normal hearing were tested with speech stimuli, and frequency-specific examinations were performed utilizing pure-tone stimuli. The findings regarding the latency and amplitude analyses of the evoked potentials confirm previously published results of this novel method. The HEARLab can be a great help when performance of the conventional audiological examinations is complicated. The examination can be performed in uncooperative subjects, even in the presence of hearing aids. The test is frequency specific and does not require anesthesia.
Boekhoff-Falk, Grace; Eberl, Daniel F.
Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289
Polat, Zahra; Ataş, Ahmet
In the literature, music education has been shown to enhance auditory perception in children and young adults. Compared to young adult non-musicians, young adult musicians demonstrate increased auditory processing and enhanced sensitivity to acoustic changes. The evoked response potentials associated with the interpretation of sound are enhanced in musicians. Studies show that training also changes sound perception and cortical responses, and earlier training appears to lead to larger changes in the auditory cortex. Most cortical studies in the literature have used pure tones or musical instrument sounds as stimulus signals. The aim of the present study was to investigate whether musical education would enhance auditory cortical responses when speech signals were used; the sound stimuli were speech sounds extracted from running speech. Non-randomized controlled study. The experimental group consisted of young adults up to 21 years old, all with a minimum of 4 years of musical education. The control group was selected from young adults of the same age without any musical education. The experiments were conducted using a cortical evoked potential analyser and /m/, /t/, /g/ sound stimulation at a level of 65 dB SPL. In this study, P1/N1/P2 amplitude and latency values were measured. Significant differences were found in the amplitude values of P1 and P2 (p<0.05). The differences among the latencies were not significant (p>0.05). The results indicate that musical experience has an effect on the nervous system, and this can be seen in cortical auditory evoked potentials recorded when the subjects hear speech.
Todd, Neil P M; Paillard, Aurore C; Kluk, Karolina; Whittle, Elizabeth; Colebatch, James G
Acoustic sensitivity of the vestibular apparatus is well established, but the contribution of vestibular receptors to the late auditory evoked potentials of cortical origin is unknown. Evoked potentials from 500 Hz tone pips were recorded using 70-channel EEG at several intensities below and above the vestibular acoustic threshold, as determined by vestibular evoked myogenic potentials (VEMPs). In healthy subjects both mid- and long-latency auditory evoked potentials (AEPs), consisting of Na, Pa, N1 and P2 waves, were observed in the sub-threshold conditions. However, in passing through the vestibular threshold, systematic changes were observed in the morphology of the potentials and in the intensity dependence of their amplitude and latency. These changes were absent in a patient without functioning vestibular receptors. In particular, for the healthy subjects there was a fronto-central negativity, which appeared at about 42 ms, referred to as an N42, prior to the AEP N1. Source analysis of both the N42 and N1 indicated involvement of cingulate cortex, as well as bilateral superior temporal cortex. Our findings are best explained by vestibular receptors contributing to what were hitherto considered purely auditory evoked potentials, and in addition they tentatively identify a new component that appears to be primarily of vestibular origin.
Friederici, Angela D
Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit.
He, Shuman; Holly, F.B. Teagle; Ewend, Matthew; Henderson, Lillian; Buchman, Craig A.
Objective This study explored the feasibility of measuring electrically-evoked cortical auditory event-related potentials (eERPs) in children with auditory brainstem implants (ABIs). Design Five children with unilateral ABIs ranging in age from 2.8 to 10.2 yrs (mean: 5.2 yrs) participated in this study. The stimulus was a 100-ms biphasic pulse train that was delivered to individual electrodes in a monopolar stimulation mode. Electrophysiological recordings of the onset eERP were conducted in all subjects. Results The onset eERP was recorded in four subjects who demonstrated auditory perception. These eERP responses showed variations in waveform morphology across subjects and stimulating electrode locations. No eERPs were observed in one subject who received no auditory sensation from ABI stimulation. Conclusions eERPs can be recorded in children with ABIs who develop auditory perception. The morphology of the eERP can vary across subjects and also across stimulating electrode locations within subjects. PMID:25426662
Dimitrijevic, Andrew; Michalewski, Henry J.; Zeng, Fan-Gang; Pratt, Hillel; Starr, Arnold
Objective We examined auditory cortical potentials in normal hearing subjects to spectral changes in continuous low and high frequency pure tones. Methods Cortical potentials were recorded to increments of frequency from continuous 250 Hz or 4000 Hz tones. The magnitude of change was random and varied from 0% to 50% above the base frequency. Results Potentials consisted of N100, P200 and a slow negative wave (SN). N100 amplitude, latency and dipole magnitude with frequency increments were significantly greater for low compared to high frequencies. Dipole amplitudes were greater in the right than left hemisphere for both base frequencies. The SN amplitude to frequency changes between 4 to 50% was not significantly related to the magnitude of spectral change. Conclusions Modulation of N100 amplitude and latency elicited by spectral change is more pronounced with low compared to high frequencies. Significance These data provide electrophysiological evidence that central processing of spectral changes in the cortex differs for low and high frequencies. Some of these differences may be related to both temporal- and spectral-based coding at the auditory periphery. Central representation of frequency change may be related to the different temporal windows of integration across frequencies. PMID:18635394
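The stimulus described above, a continuous base tone with a random spectral increment of 0% to 50%, can be synthesized with a phase-continuous frequency step. The sample rate and segment durations below are assumptions for illustration, not the study's parameters.

```python
import numpy as np

fs = 16000                 # assumed sample rate (Hz)
base = 250.0               # base frequency (250 or 4000 Hz in the study)
rng = np.random.default_rng(2)

# Random spectral increment between 0% and 50% above the base frequency.
increment = rng.uniform(0.0, 0.5)
f2 = base * (1.0 + increment)

# One second at the base frequency, then one second at the incremented
# frequency, carrying the phase across the transition (no onset click).
t1 = np.arange(fs) / fs
t2 = np.arange(fs) / fs
phase_at_switch = 2 * np.pi * base * 1.0
tone = np.concatenate([
    np.sin(2 * np.pi * base * t1),
    np.sin(phase_at_switch + 2 * np.pi * f2 * t2),
])
```

Because only the frequency changes while the waveform stays continuous, any evoked N100 to the transition reflects spectral-change detection rather than an energy onset.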
Strauß, Antje; Wöstmann, Malte; Obleser, Jonas
Listening to speech is often demanding because of signal degradations and the presence of distracting sounds (i.e., “noise”). The question of how the brain extracts only the relevant information from the mixture of sounds reaching the ear (i.e., the “cocktail party problem”) is still open. In analogy to recent findings in vision, we propose cortical alpha (~10 Hz) oscillations, measurable using M/EEG, as a pivotal mechanism to selectively inhibit the processing of noise and thereby improve auditory selective attention to task-relevant signals. We review initial evidence of enhanced alpha activity in selective listening tasks, suggesting a significant role of alpha-modulated noise suppression in speech processing. We discuss the importance of dissociating between noise interference in the auditory periphery (i.e., energetic masking) and noise interference with more central cognitive aspects of speech processing (i.e., informational masking). Finally, we point out the adverse effects of age-related hearing loss and/or cognitive decline on auditory selective inhibition. With this perspective article, we set the stage for future studies on the inhibitory role of alpha oscillations for speech processing in challenging listening situations. PMID:24904385
Fitzpatrick, K A; Imig, T J
Two tonotopically organized cortical fields, the primary (A1) and rostral (R) fields, comprise a core of auditory cortex in the owl monkey. Injections of tritiated proline were made into each of these fields to determine their projections to the auditory fields in the ipsilateral and contralateral hemispheres using autoradiographic methods. Neurons in R project to the rostromedial (RM) and primary fields in both hemispheres, and to the posterolateral (PL) and anterolateral (AL) fields in the ipsilateral hemisphere. In addition, the rostral fields in the two hemispheres are connected. Neurons in the primary field project to RM and R in both hemispheres and to AL, PL, and the caudomedial (CM) field in the ipsilateral hemisphere. The primary fields in the two hemispheres are connected. Single injections into A1 and R often result in labeling of two or more columns of tissue in the ipsilateral and contralateral target fields. Cortico-cortical axon terminations are concentrated in layer IV of fields AL and RM and in upper layer III and layer IV of R and CM. In A1, axon terminals of neurons whose cell bodies lie in A1 in the opposite hemisphere are concentrated in upper layer III and layer IV; axon terminals of neurons located in field R of the same hemisphere are concentrated in layers I and II. Layer IV of PL contains the greatest concentration of cortico-cortical axon terminals; the supragranular layers contain a somewhat lower concentration. Neurons in R project contralaterally in the anterior commissure, while A1 neurons send their axons contralaterally in the corpus callosum.
Guenther, Frank H.; Tourville, Jason A.; Bohland, Jason W.
Many studies have shown that sounds from near the center of a sound category (such as a phoneme from one's native language) are more difficult to discriminate from each other than sounds from near a category boundary. However, the neural processes underlying this phenomenon are not yet clearly understood. In this talk we describe neural models that have been developed to address experimental data from psychophysical and functional brain imaging experiments investigating sound representations in the cortex. Experiments investigating the effects of categorization and discrimination training with nonspeech sounds indicate that different training tasks have different effects on sound discriminability: discrimination training increases the discriminability of the training sounds, whereas learning a new sound category decreases the discriminability of the training sounds within the category. These results can be accounted for by a neural model in which categorization training causes a decrease in the size of the cortical representation of central sounds in the category, while discrimination training leads to an increase in the cortical representation of training sounds. This model is supported by brain imaging results for speech and nonspeech sounds. Experimental results further suggest preferential utilization of different auditory cortical regions when subjects perform identification versus discrimination tasks.
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
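The dichotic stimulus described above (e.g., 250 Hz to one ear and 253 Hz to the other, yielding a 3-Hz beat illusion) is straightforward to synthesize. The sample rate and unit amplitudes below are assumptions for illustration, not the study's presentation parameters.

```python
import numpy as np

fs = 44100            # assumed sample rate (Hz)
dur = 2.0             # 2000-ms tones, as in the study
t = np.arange(int(fs * dur)) / fs

base = 250.0          # base frequency to one ear
beat = 3.0            # beat frequency of the illusion
left = np.sin(2 * np.pi * base * t)
right = np.sin(2 * np.pi * (base + beat) * t)  # 253 Hz to the other ear

# Presented dichotically (one tone per ear), the 3-Hz beat exists only
# as a neural interaction; the acoustic sum below is the monaural
# equivalent, amplitude-modulated at |f1 - f2| = 3 Hz.
mono_sum = left + right
stereo = np.stack([left, right], axis=1)  # samples x channels
```

The key point of the paradigm is that each ear alone receives an unmodulated tone, so any beat-rate cortical oscillation must arise from binaural convergence in the central pathways.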
Schneider, U; Schleussner, E; Haueisen, J; Nowak, H; Seewald, H J
Magnetoencephalography (MEG) using auditory evoked cortical fields (AEF) is an absolutely non-invasive method of passive measurement which utilizes magnetic fields caused by specific cortical activity. By applying the exceptionally sensitive SQUID technology to record these fields of dipolar configuration produced by the fetal brain, MEG as an investigational tool could provide new insights into the development of the human brain in utero. The major constraint on this application is a very low signal-to-noise ratio (SNR), which has to be attributed to a variety of factors, including the magnetic signals generated by the fetal and maternal hearts, which inevitably obscure a straightforward signal analysis. By applying a new algorithm of specific heart-artefact reduction based on the relative regularity of the heart signals, we were able to increase the chance of extracting a fetal AEF from the raw data by means of averaging techniques and principal component analysis. Results from 27 pregnant, healthy women (third trimester of their uncomplicated pregnancy) indicate an improved detection rate and the reproducibility of the fetal MEG. We evaluate and discuss a priori criteria for signal analyses which will enable us to systematically analyze additional limiting factors, to further enhance the efficiency of this method and to promote the assessment of its possible clinical value in the future.
Shaw, N A
Many methods are employed in order to define more precisely the generators of an evoked potential (EP) waveform. One technique is to compare the timing of an EP whose origin is well established with that of one whose origin is less certain. In the present article, the latency of the primary cortical auditory evoked potential (PCAEP) was compared to each of the seven subcomponents which compose the brainstem auditory evoked potential (BAEP). The data for this comparison were derived from a retrospective analysis of previous recordings of the PCAEP and BAEP. Central auditory conduction time (CACT) was calculated by subtracting the latency of the cochlear nucleus BAEP component (wave III) from that of the PCAEP. It was found that CACT in humans is 12 msec, which is more than double the central somatosensory conduction time. The interpeak latencies between BAEP waves V, VI, and VII and the PCAEP were also calculated. It was deduced that all three waves must have an origin rather more caudal within the central auditory system than is commonly supposed. In addition, it is demonstrated that the early components of the middle latency AEP (No and Na) largely reside within the time domain between the termination of the BAEP components and the PCAEP, which would be consistent with their being far-field reflections of midbrain and subcortical auditory activity. It is concluded that as the afferent volley ascends the central auditory pathways, it generates not a sequence of high-frequency BAEP responses but rather a succession of slower post-synaptic waves. The only means of reconciling the timing of the BAEP waves with that of the PCAEP is to assume that the generation of all the BAEP components must be largely restricted to a quite confined region within the auditory nerve and the lower half of the pons.
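The central auditory conduction time defined above is a simple latency subtraction. The values below are illustrative placeholders (chosen so the difference matches the ~12-msec figure reported), not the paper's measured latencies.

```python
# Illustrative latencies in milliseconds (hypothetical values):
wave_iii_latency = 3.8   # cochlear nucleus BAEP component (wave III)
pcaep_latency = 15.8     # primary cortical auditory evoked potential

# Central auditory conduction time, as defined in the abstract:
# CACT = PCAEP latency - wave III latency
cact = pcaep_latency - wave_iii_latency
```

The abstract's comparison point is that this value is more than double the analogous central somatosensory conduction time.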
Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine
The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In humans, it is difficult to track neuronal connectivity in vivo. We investigated interarea connectivity in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEP recordings from the insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidally modulated white noises in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one, whatever the modulation frequency, 2) a unidirectional functional connection from the primary to the secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus in dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be associated with a unidirectional traveling wave but rather with a constant interaction between these areas, which could reflect the large adaptive and plastic capacities of the auditory cortex. The role of the IG is discussed.
Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal
Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures and observing non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
Coffey, Emily B J; Musacchia, Gabriella; Zatorre, Robert J
The frequency-following response (FFR) is a measure of the brain's periodic sound encoding. It is of increasing importance for studying the human auditory nervous system due to numerous associations with auditory cognition and dysfunction. Although the FFR is widely interpreted as originating from brainstem nuclei, a recent study using MEG suggested that there is also a right-lateralized contribution from the auditory cortex at the fundamental frequency (Coffey et al., 2016b). Our objectives in the present work were to validate and better localize this result using a completely different neuroimaging modality and to document the relationships between the FFR, the onset response, and cortical activity. Using a combination of EEG, fMRI, and diffusion-weighted imaging, we show that activity in the right auditory cortex is related to individual differences in FFR-fundamental frequency (f0) strength, a finding that was replicated with two independent stimulus sets, with and without acoustic energy at the fundamental frequency. We demonstrate a dissociation between this FFR-f0-sensitive response in the right and an area in left auditory cortex that is sensitive to individual differences in the timing of initial response to sound onset. Relationships to timing and their lateralization are supported by parallels in the microstructure of the underlying white matter, implicating a mechanism involving neural conduction efficiency. These data confirm that the FFR has a cortical contribution and suggest ways in which auditory neuroscience may be advanced by connecting early sound representation to measures of higher-level sound processing and cognitive function. SIGNIFICANCE STATEMENT The frequency-following response (FFR) is an EEG signal that is used to explore how the auditory system encodes temporal regularities in sound and is related to differences in auditory function between individuals. It is known that brainstem nuclei contribute to the FFR, but recent findings of an additional cortical
Mock, Jeffrey R.; Seay, Michael J.; Charney, Danielle R.; Holmes, John L.; Golob, Edward J.
Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120–200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models. PMID:26082679
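The gradient quantification described above (regressing distractor-evoked ERP amplitude against spatial location relative to the target) reduces to an ordinary least-squares slope. A minimal sketch; the amplitude values are hypothetical, not the study's data:

```python
# Attentional-gradient quantification: regress distractor-evoked ERP amplitude
# on angular distance from the attended location; the slope is the gradient.
# Amplitudes below are hypothetical illustrations, not the study's results.
def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

distance_deg = [45, 90, 135, 180]       # non-target locations relative to target
amp_uv = [4.0, 3.0, 2.0, 1.0]           # hypothetical ERP amplitudes (microvolts)
gradient = slope(distance_deg, amp_uv)  # negative: amplitude falls with distance
```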
Agung, Katrina; Purdy, Suzanne C; McMahon, Catherine M; Newall, Philip
There has been considerable recent interest in the use of cortical auditory evoked potentials (CAEPs) as an electrophysiological measure of human speech encoding in individuals with normal as well as impaired auditory systems. The development of electrophysiological measures such as CAEPs is important because they can be used to evaluate the benefits of hearing aids and cochlear implants in infants, young children, and adults who cannot cooperate for behavioral speech discrimination testing. The current study determined whether CAEPs produced by seven different speech sounds, which together cover a broad range of frequencies across the speech spectrum, could be differentiated from each other based on response latency and amplitude measures. CAEPs were recorded from ten adults with normal hearing in response to speech stimuli presented at a conversational level (65 dB SPL) via a loudspeaker. Cortical responses were reliably elicited by each of the speech sounds in all participants. CAEPs produced by speech sounds dominated by high-frequency energy were significantly different in amplitude from CAEPs produced by sounds dominated by lower-frequency energy. Significant effects of stimulus duration were also observed, with shorter duration stimuli producing larger amplitudes and earlier latencies than longer duration stimuli. This research demonstrates that CAEPs can be reliably evoked by sounds that encompass the entire speech frequency range. Further, CAEP latencies and amplitudes may provide an objective indication that spectrally different speech sounds are encoded differently at the cortical level.
Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
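The preprocessing stage of the IC Adaptation model described above (high-pass filtering with frequency-dependent time constants followed by half-wave rectification) can be sketched per spectrogram channel. The time constant, step size, and toy input are assumptions; in the real model the output feeds a standard linear-nonlinear (LN) stage:

```python
# Sketch of the IC Adaptation preprocessing: per-channel high-pass filtering
# (here, subtracting an exponential moving average of recent level, i.e.
# adaptation to mean level) followed by half-wave rectification. The time
# constant and toy spectrogram channel are assumptions for illustration.
import math

def adapt_channel(levels, tau_ms, dt_ms=5.0):
    """High-pass one spectrogram channel by removing a running mean, then rectify."""
    alpha = 1.0 - math.exp(-dt_ms / tau_ms)  # smoothing factor from time constant
    mean, out = levels[0], []
    for x in levels:
        mean += alpha * (x - mean)      # adaptation toward the recent mean level
        out.append(max(0.0, x - mean))  # half-wave rectification
    return out

# A step in sound level: the adapted output emphasizes the onset, then decays.
channel = [0.0] * 10 + [1.0] * 20
adapted = adapt_channel(channel, tau_ms=100.0)
```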
Brancucci, A; Babiloni, C; Vecchio, F; Galderisi, S; Mucci, A; Tecchio, F; Romani, G L; Rossini, P M
The present study focused on functional coupling between the human bilateral auditory cortices and on a possible influence of the right over the left auditory cortex during dichotic listening to complex non-verbal tones having near (competing) compared with distant (non-competing) fundamental frequencies. It was hypothesized that dichotic stimulation with competing tones would induce a decline of functional coupling between the two auditory cortices, as revealed by a decrease of electroencephalography coherence and an increase of the directed transfer function from the right (specialized for the present stimulus material) to the left auditory cortex. The electroencephalogram was recorded from T3 and T4 scalp sites, overlying respectively the left and right auditory cortices, and from the Cz scalp site (vertex) for control purposes. Event-related coherence between the T3 and T4 scalp sites was significantly lower in all electroencephalography bands of interest during dichotic listening to competing than to non-competing tone pairs. This was a specific effect, since event-related coherence did not differ in a monotic control condition. Furthermore, event-related coherence between T3 and Cz and between T4 and Cz scalp sites showed no significant effects. Conversely, the directed transfer function results showed negligible influence at the group level of the right over the left auditory cortex during dichotic listening. These results suggest a decrease of functional coupling between bilateral auditory cortices during competing dichotic stimuli as a possible neural substrate for the lateralization of auditory stimuli during dichotic listening.
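The event-related coherence measure at the heart of the study can be sketched at a single frequency, estimated across trials from per-trial Fourier coefficients. The channel names, the 10 Hz test frequency, and the toy sinusoidal trials are illustrative assumptions (the study analysed EEG bands, not pure tones):

```python
# Event-related coherence between two channels (e.g. T3 and T4) at one
# frequency, across trials. Trials sharing a component with a consistent
# phase relation yield coherence near 1; unrelated phases yield values near 0.
import cmath, math, random

def fourier_coeff(trial, freq, fs):
    """Complex amplitude of `trial` at `freq` (Hz), sampling rate `fs`."""
    n = len(trial)
    return sum(x * cmath.exp(-2j * math.pi * freq * t / fs)
               for t, x in enumerate(trial)) / n

def coherence(trials_a, trials_b, freq, fs):
    """Magnitude-squared coherence across trials at one frequency (0..1)."""
    coeffs = [(fourier_coeff(a, freq, fs), fourier_coeff(b, freq, fs))
              for a, b in zip(trials_a, trials_b)]
    sxy = sum(ca * cb.conjugate() for ca, cb in coeffs)
    sxx = sum(abs(ca) ** 2 for ca, _ in coeffs)
    syy = sum(abs(cb) ** 2 for _, cb in coeffs)
    return abs(sxy) ** 2 / (sxx * syy)

random.seed(0)
fs, freq, n = 250, 10.0, 200
# Trials sharing a 10 Hz component with a fixed phase lag -> high coherence.
phases = [random.uniform(0, 2 * math.pi) for _ in range(30)]
t3 = [[math.sin(2 * math.pi * freq * t / fs + p) for t in range(n)] for p in phases]
t4 = [[math.sin(2 * math.pi * freq * t / fs + p + 0.5) for t in range(n)] for p in phases]
high = coherence(t3, t4, freq, fs)
# Independent phases across channels -> low coherence.
t4_rand = [[math.sin(2 * math.pi * freq * t / fs + random.uniform(0, 2 * math.pi))
            for t in range(n)] for _ in range(30)]
low = coherence(t3, t4_rand, freq, fs)
```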
O'Brien, Jennifer L; Nikjeh, Dee A; Lister, Jennifer J
The goal of this study was to begin to explore whether the beneficial auditory neural effects of early music training persist throughout life and influence age-related changes in neurophysiological processing of sound. Cortical auditory evoked potentials (CAEPs) elicited by harmonic tone complexes were examined, including P1-N1-P2, mismatch negativity (MMN), and P3a. Data from older adult musicians (n = 8) and nonmusicians (n = 8) (ages 55-70 years) were compared to previous data from young adult musicians (n = 40) and nonmusicians (n = 20) (ages 18-33 years). P1-N1-P2 amplitudes and latencies did not differ between older adult musicians and nonmusicians; however, MMN and P3a latencies for harmonic tone deviances were earlier for older musicians than older nonmusicians. Comparisons of P1-N1-P2, MMN, and P3a components between older and young adult musicians and nonmusicians suggest that P1 and P2 latencies are significantly affected by age, but not musicianship, while MMN and P3a appear to be more sensitive to effects of musicianship than aging. Findings support beneficial influences of musicianship on central auditory function and suggest a positive interaction between aging and musicianship on the auditory neural system.
Lin, Frank G.; Galindo-Leon, Edgar E.; Ivanova, Tamara N.; Mappus, Rudolph C.; Liu, Robert C.
A growing interest in sensory system plasticity in the natural context of motherhood has created the need to investigate how intrinsic physiological state (e.g., hormonal, motivational, etc.) interacts with sensory experience to drive adaptive cortical plasticity for behaviorally relevant stimuli. Using a maternal mouse model of auditory cortical inhibitory plasticity for ultrasonic pup calls, we examined the role of pup care versus maternal physiological state in the long-term retention of this plasticity. Very recent experience caring for pups by Early Cocarers (virgin females) produced stronger call-evoked lateral-band inhibition in auditory cortex. However, this plasticity was absent when measured post-weaning in Cocarers, even though it was present at the same time point in Mothers, whose pup experience occurred under a maternal physiological state. A two-alternative choice phonotaxis task revealed that the same animal groups (Early Cocarers and Mothers) demonstrating stronger lateral-band inhibition also preferred pup calls over a neutral sound, a correlation consistent with the hypothesis that this inhibitory mechanism may play a mnemonic role and is engaged to process sounds that are particularly salient. Our electrophysiological data hint at a possible mechanism through which the maternal physiological state may act to preserve the cortical plasticity: selectively suppressing detrimental spontaneous activity in neurons that are responsive to calls, an effect observed only in Mothers. Taken together, the maternal physiological state during the care of pups may help maintain the memory trace of behaviorally salient infant cues within core auditory cortex, potentially ensuring a more rapid induction of future maternal behavior. PMID:23707982
Kayser, Christoph; Logothetis, Nikos K.
Recent studies using functional imaging and electrophysiology demonstrate that processes related to sensory integration are not restricted to higher association cortices but already occur in early sensory cortices, such as primary auditory cortex. While anatomical studies suggest the superior temporal sulcus (STS) as likely source of visual input to auditory cortex, little evidence exists to support this notion at the functional level. Here we tested this hypothesis by simultaneously recording from sites in auditory cortex and STS in alert animals stimulated with dynamic naturalistic audio–visual scenes. Using Granger causality and directed transfer functions we first quantified causal interactions at the level of field potentials, and subsequently determined those frequency bands that show effective interactions, i.e. interactions that are relevant for influencing neuronal firing at the target site. We found that effective interactions from auditory cortex to STS prevail below 20 Hz, while interactions from STS to auditory cortex prevail above 20 Hz. In addition, we found that directed interactions from STS to auditory cortex make a significant contribution to multisensory influences in auditory cortex: Sites in auditory cortex showing multisensory enhancement received stronger feedback from STS during audio–visual than during auditory stimulation, while sites with multisensory suppression received weaker feedback. These findings suggest that beta frequencies might be important for inter-areal coupling in the temporal lobe and demonstrate that superior temporal regions indeed provide one major source of visual influences to auditory cortex. PMID:19503750
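The directed-interaction logic applied above (Granger causality: series x "Granger-causes" y if past x improves prediction of y beyond past y alone) can be illustrated with a minimal lag-1, two-series sketch. The coupled synthetic series and the simple no-intercept regressions are assumptions; real analyses use multivariate, frequency-resolved estimators on field potentials:

```python
# Toy Granger-causality sketch: compare residual variance of y predicted from
# its own past (restricted model) vs. from its own past plus the past of x
# (full model). A ratio well above 1 indicates x helps predict y.
import random

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def resid_var(y, preds):
    """Residual variance of y under a least-squares fit on 1 or 2 predictors (no intercept)."""
    if len(preds) == 1:
        (x1,) = preds
        b1 = _dot(x1, y) / _dot(x1, x1)
        resid = [yi - b1 * a for yi, a in zip(y, x1)]
    else:
        x1, x2 = preds
        s11, s22, s12 = _dot(x1, x1), _dot(x2, x2), _dot(x1, x2)
        s1y, s2y = _dot(x1, y), _dot(x2, y)
        det = s11 * s22 - s12 * s12
        b1 = (s22 * s1y - s12 * s2y) / det
        b2 = (s11 * s2y - s12 * s1y) / det
        resid = [yi - b1 * a - b2 * b for yi, a, b in zip(y, x1, x2)]
    return _dot(resid, resid) / len(resid)

random.seed(1)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0]
for t in range(1, n):  # y is driven by past x (coupling strength 0.8)
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.5))

y_now, y_past, x_past = y[1:], y[:-1], x[:-1]
gc_x_to_y = resid_var(y_now, [y_past]) / resid_var(y_now, [y_past, x_past])
x_now = x[1:]
gc_y_to_x = resid_var(x_now, [x_past]) / resid_var(x_now, [x_past, y_past])
```

With x driving y, `gc_x_to_y` is well above 1 while `gc_y_to_x` stays near 1, recovering the direction of coupling.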
Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J
Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how
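The SEAP stimulus described above (a pure-tone carrier amplitude-modulated at the sum of 37 and 81 Hz, 100% depth, with an 85%/15% carrier-frequency oddball) can be synthesized in a few lines. The sampling rate and tone duration are assumptions; the published stimulus parameters may differ:

```python
# SEAP-style stimulus sketch: carrier * (1 + depth * mean of 37 Hz and 81 Hz
# modulators), so the envelope stays non-negative at 100% depth, plus an
# oddball sequence of 500 Hz standards (85%) and 600 Hz deviants (15%).
import math, random

def seap_tone(carrier_hz, dur_s=0.5, fs=44100, depth=1.0):
    """One amplitude-modulated tone as a list of samples."""
    samples = []
    for i in range(int(dur_s * fs)):
        t = i / fs
        mod = 0.5 * (math.sin(2 * math.pi * 37 * t) + math.sin(2 * math.pi * 81 * t))
        samples.append((1.0 + depth * mod) * math.sin(2 * math.pi * carrier_hz * t))
    return samples

random.seed(7)
# Oddball sequence over 2400 presentations: 85% standards, 15% deviants.
sequence = [600 if random.random() < 0.15 else 500 for _ in range(2400)]
standard = seap_tone(500)
```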
Wisniewski, Matthew G.; Mercado, Eduardo; Gramann, Klaus; Makeig, Scott
Several acoustic cues contribute to auditory distance estimation. Nonacoustic cues, including familiarity, may also play a role. We tested participants’ ability to distinguish the distances of acoustically similar sounds that differed in familiarity. Participants were better able to judge the distances of familiar sounds. Electroencephalographic (EEG) recordings collected while participants performed this auditory distance judgment task revealed that several cortical regions responded in different ways depending on sound familiarity. Surprisingly, these differences were observed in auditory cortical regions as well as other cortical regions distributed throughout both hemispheres. These data suggest that learning about subtle, distance-dependent variations in complex speech sounds involves processing in a broad cortical network that contributes both to speech recognition and to how spatial information is extracted from speech. PMID:22911734
Farley, Brandon J; Noreña, Arnaud J
How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts.
Raij, Tommi; Ahveninen, Jyrki; Lin, Fa-Hsuan; Witzel, Thomas; Jääskeläinen, Iiro P; Letham, Benjamin; Israeli, Emily; Sahyoun, Cherif; Vasios, Christos; Stufflebeam, Steven; Hämäläinen, Matti; Belliveau, John W
Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects the influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl's gyrus) the onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30-35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory and 74 ms in the visual cortex, i.e., 3-21 ms after inputs from the two modalities converged.
PMID:20584181
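The conduction-delay arithmetic above can be checked numerically. Onset latencies are taken from the abstract; averaging the left/right visual-onset latencies in auditory cortex is our assumption for illustration:

```python
# Latency arithmetic from the abstract: conduction delay = cross-sensory onset
# in one cortex minus the sensory-specific onset in the cortex of the driving
# modality. Hemisphere averaging for the 82/75 ms pair is our assumption.
onset_ms = {
    ("auditory_cortex", "auditory_stim"): 23.0,             # both hemispheres
    ("auditory_cortex", "visual_stim"): (82.0 + 75.0) / 2,  # left/right mean
    ("visual_cortex", "visual_stim"): 43.0,
    ("visual_cortex", "auditory_stim"): 53.0,
}

# Both delays fall in or near the 30-35 ms range reported in the abstract.
delay_visual_to_auditory = (onset_ms[("auditory_cortex", "visual_stim")]
                            - onset_ms[("visual_cortex", "visual_stim")])
delay_auditory_to_visual = (onset_ms[("visual_cortex", "auditory_stim")]
                            - onset_ms[("auditory_cortex", "auditory_stim")])
```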
Meyer, Martin; Liem, Franziskus; Hirsiger, Sarah; Jäncke, Lutz; Hänggi, Jürgen
This investigation provides an analysis of structural asymmetries in 5 anatomically defined regions (Heschl's gyrus, HG; Heschl's sulcus, HS; planum temporale, PT; planum polare, PP; superior temporal gyrus, STG) within the human auditory-related cortex. Volumetric 3-dimensional T1-weighted magnetic resonance imaging scans were collected from 104 participants (52 males). Cortical volume (CV), cortical thickness (CT), and cortical surface area (CSA) were calculated based on individual scans of these anatomical traits. This investigation demonstrates a leftward asymmetry for CV and CSA that is observed in the HG, STG, and PT regions. As regards CT, we note a rightward asymmetry in the HG and HS. A correlation analysis of asymmetry indices between measurements for distinct regions of interest (ROIs) yields significant correlations between CT and CV in 4 of 5 ROIs (HG, HS, PT, and STG). Significant correlation values between CSA and CV are observed for all 5 ROIs. The findings suggest that auditory-related cortical areas demonstrate larger leftward asymmetry with respect to the CSA, while a clear rightward asymmetry with respect to CT is salient in both the primary and the secondary auditory cortex only. In addition, we propose that CV is not an ideal neuromarker for anatomical measurements. CT and CSA should be considered independent traits of anatomical asymmetries in the auditory-related cortex.
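The abstract correlates asymmetry indices across anatomical measures without stating the formula; a common definition, assumed here (the paper may normalize differently), is (L - R) / (L + R), positive for leftward asymmetry:

```python
# Asymmetry index sketch: signed, size-normalized left-right difference.
# The (L - R) / (L + R) form is a common convention assumed for illustration;
# all measurement values below are hypothetical.
def asymmetry_index(left, right):
    """+ for leftward, - for rightward, 0 for symmetric measures."""
    return (left - right) / (left + right)

# Hypothetical Heschl's gyrus values matching the reported pattern: larger
# left surface area (leftward CSA asymmetry) but thicker right cortex
# (rightward CT asymmetry).
csa_ai = asymmetry_index(620.0, 540.0)  # mm^2 -> positive (leftward)
ct_ai = asymmetry_index(2.45, 2.60)     # mm   -> negative (rightward)
```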
Baltzell, Lucas S; Billings, Curtis J
The purpose of this study was to determine the effects of signal-to-noise ratio (SNR) and signal level on the offset response of the cortical auditory evoked potential (CAEP). Successful listening often depends on how well the auditory system can extract target signals from competing background noise. Both signal onsets and offsets are encoded neurally and contribute to successful listening in noise. Neural onset responses to signals in noise demonstrate a strong sensitivity to SNR rather than signal level; however, the sensitivity of neural offset responses to these cues is not known. We analyzed the offset response from two previously published datasets for which only the onset response was reported. For both datasets, CAEPs were recorded from young normal-hearing adults in response to a 1000-Hz tone. For the first dataset, tones were presented at seven different signal levels without background noise, while the second dataset varied both signal level and SNR. Offset responses demonstrated sensitivity to absolute signal level in quiet, to SNR, and to absolute signal level in noise. Offset sensitivity to signal level when presented in noise contrasts with previously published onset results. This sensitivity suggests a potential clinical measure of cortical encoding of signal level in noise.
Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C
In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex.
Yuvaraj, Pradeep; Mannarukrishnaiah, Jayaram
The purpose of the present study was to investigate the relationship between cortical processing of speech and benefit from hearing aids in individuals with auditory dys-synchrony. Data were collected from 38 individuals with auditory dys-synchrony. Participants were selected based on hearing thresholds, middle ear reflexes, otoacoustic emissions, and auditory brain stem responses. Cortical-evoked potentials were recorded for click and speech. Participants with auditory dys-synchrony were fitted with bilateral multichannel wide dynamic range compression hearing aids. Aided and unaided speech identification scores for 40 words were obtained for each participant. Hierarchical cluster analysis using Ward's method clearly showed four subgroups of participants with auditory dys-synchrony based on the hearing aid benefit score (aided minus unaided speech identification score). The mean aided and unaided speech identification scores differed significantly in participants with auditory dys-synchrony. However, the mean unaided speech identification scores were not significantly different between the four subgroups. The N2 amplitude and P1 latency of the speech-evoked cortical potentials were significantly different between the four subgroups formed based on hearing aid benefit scores. The results indicated that subgroups of individuals with auditory dys-synchrony who benefit from hearing aids exist. Individuals who benefitted from hearing aids showed decreased N2 amplitudes compared with those who did not. N2 amplitude is associated with greater suppression of background noise while processing speech.
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
Age-related cognitive decline has been attributed to an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals on several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei differ from each other in response properties and neural connectivity. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation of the lemniscal system, comparable to repetitive tonal stimulation, evokes three major types of changes in the physiological properties of cortical and subcortical auditory neurons, such as their tuning to specific values of acoustic parameters, through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a "differential" gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning elicit tone-specific and nonspecific plastic changes, respectively. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulation of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning.
Tsukano, Hiroaki; Horie, Masao; Hishida, Ryuichi; Takahashi, Kuniyuki; Takebayashi, Hirohide; Shibuki, Katsuei
Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory cortex have been annotated and consolidated in recent brain atlases, the multiple auditory cortical regions have not yet been presented from a coronal view. In the current study, we obtained regional coordinates of the six auditory cortical regions of the C57BL/6 mouse brain and illustrated these regions on template coronal brain slices. These results should reinforce the existing mouse brain atlases and support future studies in the auditory cortex.
PMID:26924462
Molloy, Anne T.; Jiradejvong, Patpong; Braun, Allen R.
Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H₂¹⁵O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study. PMID:19662456
Howarth, A; Shone, G R
There are a number of pathophysiological processes underlying age related changes in the auditory system. The effects of hearing loss can have consequences beyond the immediate loss of hearing, and may have profound effects on the functioning of the person. While a deficit in hearing can be corrected to some degree by a hearing aid, auditory rehabilitation requires much more than simply amplifying external sound. It is important that those dealing with elderly people are aware of all the issues involved in age related hearing loss. PMID:16517797
Willmore, Ben D B; Schoppe, Oliver; King, Andrew J; Schnupp, Jan W H; Harper, Nicol S
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low, as it introduces no new free parameters.
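The preprocessing stage described above (per-frequency high-pass filtering of the spectrogram, then half-wave rectification, before a standard LN model) might be sketched as follows; the leaky-integrator implementation and the time constant are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ic_adaptation(spec, taus, dt=0.005):
    """Sketch of the IC Adaptation preprocessing: for each frequency
    channel, subtract a running mean (leaky integrator with time
    constant tau, in seconds) and half-wave rectify, approximating
    high-pass filtering with frequency-dependent time constants."""
    out = np.zeros_like(spec)
    for f, tau in enumerate(taus):
        alpha = dt / (tau + dt)                      # integrator gain
        mean = spec[f, 0]
        for t in range(spec.shape[1]):
            mean += alpha * (spec[f, t] - mean)      # adapting mean level
            out[f, t] = max(spec[f, t] - mean, 0.0)  # high-pass + rectify
    return out

# A steady channel adapts away; a level step passes transiently.
spec = np.ones((1, 200))
spec[0, 100:] = 2.0
y = ic_adaptation(spec, taus=[0.1])  # tau = 100 ms (illustrative)
```

The adapted, rectified spectrogram would then replace the raw spectrogram as input to the linear-nonlinear stage.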
Kyuhou, Shin-ichi; Matsuzaki, Ryuichi; Gemba, Hisae
Auditory evoked potentials (AEPs) were recorded in the motor cortices (MC) with chronically implanted electrodes in the rat. Some of the AEPs in the MC, namely negative potentials on the surface and positive ones at a depth of 2 mm at latencies of about 50-150 ms, were abolished by limited bilateral lesions of the anterior perirhinal cortex (PERa), which was responsive to auditory stimuli, indicating that the AEPs in the MC were at least partially relayed in the PERa. The auditory response in the MC was prominently enhanced when water was supplied or the medial forebrain bundle was stimulated after the auditory stimulus. These results indicate that the MC receives reward-associated auditory information from the PERa.
Reznik, Daniel; Ossmy, Ori; Mukamel, Roy
Accumulating evidence demonstrates that responses in auditory cortex to auditory consequences of self-generated actions are modified relative to the responses evoked by identical sounds generated by an external source. Such modifications have been suggested to occur through a corollary discharge sent from the motor system, although the exact neuroanatomical origin is unknown. Furthermore, since tactile input has also been shown to modify responses in auditory cortex, it is not even clear whether the source of such modifications is motor output or somatosensory feedback. We recorded functional magnetic resonance imaging (fMRI) data from healthy human subjects (n = 11) while manipulating the rate at which they performed sound-producing actions with their right hand. In addition, we manipulated the amount of tactile feedback to examine the relative roles of motor and somatosensory cortices in modifying evoked activity in auditory cortex (superior temporal gyrus). We found an enhanced fMRI signal in left auditory cortex during perception of self-generated sounds relative to passive listening to identical sounds. Moreover, the signal difference between active and passive conditions in left auditory cortex covaried with the rate of sound-producing actions and was invariant to the amount of tactile feedback. Together with functional connectivity analysis, our results suggest motor output from supplementary motor area and left primary motor cortex as the source of signal modification in auditory cortex during perception of self-generated sounds. Motor signals from these regions could represent a predictive signal of the expected auditory consequences of the performed action.
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…
Mendes, Raquel Metzker; Barbosa, Rafael Inácio; Salmón, Carlos Ernesto Garrido; Rondinoni, Carlo; Escorsi-Rosset, Sara; Delsim, Juliana Carla; Barbieri, Cláudio Henrique; Mazzer, Nilton
The purpose of this study was to shed light on cortical audiotactile integration and sensory substitution mechanisms, thought to serve as a basis for the use of a sensor glove in the preservation of the cortical map of the hand after peripheral nerve injuries. Fourteen subjects were selected and randomly assigned either to a training group, trained to substitute hearing for touch using a sensor glove, or to an untrained control group. Training group volunteers had to identify textures by sound alone. In an fMRI experiment, all subjects received three types of stimuli: tactile only, combined audiotactile stimulation, and auditory only. Results indicate that, for trained subjects, a coupling between auditory and somatosensory cortical areas is established through associative areas. Differences in signal correlation between groups point to a pairing mechanism which at first functionally connects the primary auditory and somatosensory areas (trained subjects). Later, this connection seems to be mediated by associative areas. Training with the sensor glove influences cortical audiotactile integration mechanisms, determining BOLD signal changes in the somatosensory area during auditory stimulation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
It is generally thought that background noise can mask auditory information. However, how noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected the receptive field properties of individual neurons. We found that background noise, when above a certain critical/effective level, resulted in an elevation of the intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This preserved the preferred characteristic frequency (CF) and the overall shape of the TRF, but reduced the responsive frequency range and enhanced frequency selectivity at a given stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shift along the intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
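The reported effect, a threshold that is unchanged below a critical noise level and rises linearly above it, can be written as a simple piecewise-linear rule; the parameter values below are illustrative assumptions, not fitted values from the study.

```python
def tone_threshold(noise_db, base_db=20.0, critical_db=30.0, slope=1.0):
    """Piecewise-linear sketch of the observed effect: above a
    critical/effective noise level, the tone-evoked response threshold
    rises linearly with noise level, translating the tonal receptive
    field upward along the intensity axis. All parameters illustrative."""
    excess = max(0.0, noise_db - critical_db)
    return base_db + slope * excess

print(tone_threshold(20.0))  # below critical level: unchanged, 20.0
print(tone_threshold(50.0))  # 20 dB above critical: shifted to 40.0
```

Because the whole receptive field shifts together, the same rule applies at every frequency, which is why the TRF shape and CF are preserved.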
Giordano, Bruno L; McAdams, Stephen; Zatorre, Robert J; Kriegeskorte, Nikolaus; Belin, Pascal
The human brain is thought to process auditory objects along a hierarchical temporal "what" stream that progressively abstracts object information from the low-level structure (e.g., loudness) as processing proceeds along the middle-to-anterior direction. Empirical demonstrations of abstract object encoding, independent of low-level structure, have relied on speech stimuli, and non-speech studies of object-category encoding (e.g., human vocalizations) often lack a systematic assessment of low-level information (e.g., vocalizations are highly harmonic). It is currently unknown whether abstract encoding constitutes a general functional principle that operates for auditory objects other than speech. We combined multivariate analyses of functional imaging data with an accurate analysis of the low-level acoustical information to examine the abstract encoding of non-speech categories. We observed abstract encoding of the living and human-action sound categories in the fine-grained spatial distribution of activity in the middle-to-posterior temporal cortex (e.g., planum temporale). Abstract encoding of auditory objects appears to extend to non-speech biological sounds and to operate in regions other than the anterior temporal lobe. Neural processes for the abstract encoding of auditory objects might have facilitated the emergence of speech categories in our ancestors.
Dimitrijevic, Andrew; Starr, Arnold; Bhatt, Shrutee; Michalewski, Henry J.; Zeng, Fan-Gang; Pratt, Hillel
Objectives: Auditory cortical N100s were examined in ten auditory neuropathy (AN) subjects as objective measures of impaired hearing. Methods: Latency and amplitudes of N100 in AN to increases of frequency (4–50%) or intensity (4–8 dB) of low (250 Hz) or high (4000 Hz) frequency tones were compared with results from normal-hearing controls. The sites of auditory nerve dysfunction were pre-synaptic (n=3), due to otoferlin mutations causing temperature-sensitive deafness; post-synaptic (n=4), accompanied by other cranial and/or peripheral neuropathies; and undefined (n=3). Results: AN subjects consistently had N100s only to the largest changes of frequency or intensity, whereas controls consistently had N100s to all but the smallest frequency and intensity changes. N100 latency in AN was significantly delayed compared to controls, more so for 250 than for 4000 Hz and more so for changes of intensity compared to frequency. N100 amplitudes to frequency change were significantly reduced in AN compared to controls, except for pre-synaptic AN, in whom amplitudes were greater than in controls. N100 latency to frequency change of 250 but not of 4000 Hz was significantly related to speech perception scores. Conclusions: As a group, AN subjects' N100 potentials were abnormally delayed and smaller, particularly for low frequency. The extent of these abnormalities differed between pre- and post-synaptic forms of the disorder. Significance: Abnormalities of auditory cortical N100 in AN reflect disorders of both temporal processing (low frequency) and neural adaptation (high frequency). Auditory N100 latency to the low frequency provides an objective measure of the degree of impaired speech perception in AN. PMID:20822952
Itoh, Kosuke; Nejime, Masafumi; Konoike, Naho; Nakada, Tsutomu; Nakamura, Katsuki
Scalp-recorded evoked potentials (EPs) provide researchers and clinicians with irreplaceable means for recording stimulus-related neural activities in the human brain, owing to their high temporal resolution, handiness, and, perhaps more importantly, non-invasiveness. This work recorded the scalp cortical auditory EP (CAEP) in unanesthetized monkeys using methods essentially identical to those applied to humans. Young adult rhesus monkeys (Macaca mulatta, 5-7 years old) were seated in a monkey chair, and their head movements were partially restricted by polystyrene blocks and tension poles placed around the head. Individual electrodes were fixed to the scalp using collodion according to the 10-20 system. Pure tone stimuli were presented while electroencephalograms were recorded from up to nineteen channels, including an electrooculogram channel. In all monkeys (n = 3), the recorded CAEP comprised a series of positive and negative deflections, labeled here as macaque P1 (mP1), macaque N1 (mN1), macaque P2 (mP2), and macaque N2 (mN2); these transient responses to sound onset were followed by a sustained potential that continued for the duration of the sound, labeled the macaque sustained potential (mSP). mP1, mN2 and mSP were the prominent responses, and they had maximal amplitudes over frontal/central midline electrode sites, consistent with generators in auditory cortices. To our knowledge, the study represents the first noninvasive scalp recording of CAEP in alert rhesus monkeys. Copyright © 2015 Elsevier B.V. All rights reserved.
Radwan, Heba Mohammed; El-Gharib, Amani Mohamed; Erfan, Adel Ali; Emara, Afaf Ahmad
Diabetes mellitus (DM) is a common endocrine and metabolic disorder. Evoked potentials offer the possibility of performing a functional evaluation of neural pathways in the central nervous system. The aim of this study was to investigate the effect of type 1 diabetes mellitus (T1DM) on the auditory brain stem response (ABR) and cortical auditory evoked potentials (CAEPs). The study included two groups: a control group (GI) of 20 healthy children with normal peripheral hearing, and a study group (GII) of 30 children with T1DM. Basic audiological evaluation, ABR, and CAEPs were carried out in both groups. Absolute latencies of the ABR and CAEP waves were delayed in the study group, whereas amplitudes showed no significant difference between the groups. A positive correlation was found between ABR wave latencies and the duration of DM, but no correlation was found between ABR, CAEPs, and glycated hemoglobin. The delay in ABR and CAEP wave latencies in children with T1DM indicates an abnormality of neural conduction in DM patients, and the duration of DM has a greater effect on auditory function than the control of DM.
Martin, B A; Boothroyd, A
The acoustic change complex (ACC) is a scalp-recorded negative-positive voltage swing elicited by a change during an otherwise steady-state sound. The ACC was obtained from eight adults in response to changes of amplitude and/or spectral envelope at the temporal center of a three-formant synthetic vowel lasting 800 ms. In the absence of spectral change, the group mean waveforms showed a clear ACC to amplitude increments of 2 dB or more and decrements of 3 dB or more. In the presence of a change of second formant frequency (from perceived /u/ to perceived /i/), amplitude increments increased the magnitude of the ACC but amplitude decrements had little or no effect. The fact that the just detectable amplitude change is close to the psychoacoustic limits of the auditory system augurs well for the clinical application of the ACC. The failure to find a condition under which the spectrally elicited ACC is diminished by a small change of amplitude supports the conclusion that the observed ACC to a change of spectral envelope reflects some aspect of cortical frequency coding. Taken together, these findings support the potential value of the ACC as an objective index of auditory discrimination capacity.
Billings, Curtis J; Tremblay, Kelly L; Souza, Pamela E; Binns, Malcolm A
Hearing aid amplification can be used as a model for studying the effects of auditory stimulation on the central auditory system (CAS). We examined the effects of stimulus presentation level on the physiological detection of sound in unaided and aided conditions. P1, N1, P2, and N2 cortical evoked potentials were recorded in sound field from 13 normal-hearing young adults in response to a 1000-Hz tone presented at seven stimulus intensity levels. As expected, peak amplitudes increased and peak latencies decreased with increasing intensity for unaided and aided conditions. However, there was no significant effect of amplification on latencies or amplitudes. Taken together, these results demonstrate that 20 dB of hearing aid gain affects neural responses differently than 20 dB of stimulus intensity change. Hearing aid signal processing is discussed as a possible contributor to these results. This study demonstrates (1) the importance of controlling for stimulus intensity when evoking responses in aided conditions, and (2) the need to better understand the interaction between the hearing aid and the CAS.
Niwa, Mamiko; Johnson, Jeffrey S; O'Connor, Kevin N; Sutter, Mitchell L
The effect of attention on single-neuron responses in the auditory system is unresolved. We found that when monkeys discriminated temporally amplitude-modulated (AM) from unmodulated sounds, primary auditory cortical (A1) neurons discriminated those sounds better than when the monkeys were not discriminating them. This was observed for both average firing rate and vector strength (VS), a measure of how well neurons temporally follow the stimulus' temporal modulation. When data were separated into nonsynchronized and synchronized responses, the firing rate of nonsynchronized responses best distinguished AM noise from unmodulated noise, followed by VS for synchronized responses, with the firing rate of synchronized neurons providing the poorest AM discrimination. Firing rate-based AM discrimination for synchronized neurons, however, improved most with task engagement, showing that the code least sensitive in the passive condition improves the most with task engagement. Rate coding improved due to larger increases in absolute firing rate at higher modulation depths than at lower depths and for unmodulated sounds. Relative to spontaneous activity (which increased with engagement), the response to unmodulated sounds decreased substantially. The improvement in temporal coding (responses following a stimulus more precisely in time when animals were required to attend to it) expands the framework of possible mechanisms of attention to include increasing the temporal precision of stimulus following. These findings provide a crucial step toward understanding the coding of temporal modulation and support a model in which rate and temporal coding work in parallel, permitting a multiplexed code for temporal modulation and a complementary representation of rate and temporal coding.
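Vector strength, the temporal measure used above, has a standard definition: each spike is treated as a unit vector at its phase within the modulation cycle, and VS is the length of the mean resultant. A minimal sketch (the spike trains below are synthetic examples, not the study's data):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """VS = |mean of exp(i * 2*pi * f * t_k)| over spike times t_k.
    VS -> 1 for perfect phase locking, -> 0 for uniform phases."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Spikes locked to one phase of a 10 Hz modulation give VS near 1;
# spikes spread evenly across the cycle give VS near 0.
locked = np.arange(20) * 0.1    # one spike per 100-ms cycle
uniform = np.arange(8) / 80.0   # 8 spikes evenly spanning one cycle
print(vector_strength(locked, 10.0), vector_strength(uniform, 10.0))
```

Because VS ignores overall spike count, it complements firing rate, which is why the two codes can carry independent information about the same stimulus.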
de la Mothe, Lisa A.; Blumell, Suzanne; Kajikawa, Yoshinao; Hackett, Troy A.
The current working model of primate auditory cortex is constructed from a number of studies of both New and Old World monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organization has not been fully established. In previous studies, we focused on the architecture and connections of the core and medial belt areas (de la Mothe et al., 2006a,b). In the current study the corticocortical connections of the lateral belt and parabelt were examined in the marmoset. Tracers were injected into both rostral and caudal portions of the lateral belt and parabelt. Both regions revealed topographic connections along the rostrocaudal axis, where caudal areas of injection had stronger connections with caudal areas, and rostral areas of injection with rostral areas. The lateral belt had strong connections with the core, belt and parabelt, whereas the parabelt had strong connections with the belt but not the core. Label in the core from injections in the parabelt was significantly reduced or absent, consistent with the idea that the parabelt relies mainly on the belt for its cortical input. In addition, the present and previous studies indicate hierarchical principles of anatomical organization in the marmoset that are consistent with those observed in other primates. PMID:22461313
Kayser, Stephanie J; Ince, Robin A A; Gross, Joachim; Kayser, Christoph
The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in the delta and theta bands reflects functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is under top-down control.
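One common way to quantify entrainment fidelity of the kind discussed above is a phase-locking value (PLV) between the band-limited EEG and the stimulus envelope. The sketch below uses a synthetic 2 Hz envelope with added noise; the filter design, band edges, and PLV measure are generic assumptions for illustration, not this study's analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, fs, lo, hi):
    """Zero-phase Butterworth band-pass, then instantaneous phase."""
    b, a = butter(2, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def plv(phase_a, phase_b):
    """Phase-locking value between two phase series (0 none, 1 perfect)."""
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
envelope = np.sin(2.0 * np.pi * 2.0 * t)  # synthetic 2 Hz "speech envelope"
eeg = envelope + 0.5 * np.random.default_rng(1).standard_normal(t.size)

# Delta-band (1-4 Hz) phase locking between the noisy "EEG" and envelope.
delta_plv = plv(band_phase(eeg, fs, 1.0, 4.0),
                band_phase(envelope, fs, 1.0, 4.0))
```

Comparing such a measure between regular and pause-manipulated speech, separately for delta and theta bands, is the kind of contrast that can dissociate the two entrainment processes.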
Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492
De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
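The contrast between a linear spectrotemporal receptive field and a network receptive field can be illustrated with a minimal forward pass: each sub-receptive field filters the spectrogram linearly and applies its own nonlinearity, and the sub-field outputs are combined with excitatory and inhibitory weights before an output nonlinearity. All weights and dimensions below are illustrative, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_hist = 16, 10  # spectral channels x time-history bins

# A spectrogram snippet (frequency x time window), e.g. from a natural sound.
stimulus = rng.standard_normal((n_freq, n_hist))

# Linear STRF model: rectified weighted sum over the spectrogram window.
strf = rng.standard_normal((n_freq, n_hist))
linear_response = max(0.0, float(np.sum(strf * stimulus)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Network receptive field: three sub-receptive fields, each with its own
# sigmoidal nonlinearity, combined by a rectifying output stage.
subfields = [rng.standard_normal((n_freq, n_hist)) for _ in range(3)]
hidden = np.array([sigmoid(float(np.sum(w * stimulus))) for w in subfields])
out_weights = np.array([1.0, -0.8, 0.5])  # excitatory and inhibitory sub-fields
nrf_response = max(0.0, float(out_weights @ hidden))
```

The negative output weight gives a suppressive sub-field: the unit can respond only when its excitatory features are present and the inhibitory feature is absent, a simple instance of the conjunctive selectivity described in the abstract.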
Shahin, Antoine; Roberts, Larry E; Trainor, Laurel J
Auditory evoked potentials (AEPs) express the development of mature synaptic connections in the upper neocortical laminae known to occur between 4 and 15 years of age. AEPs evoked by piano, violin, and pure tones were measured twice in a group of 4- to 5-year-old children enrolled in Suzuki music lessons and in non-musician controls. P1 was larger in the Suzuki pupils for all tones whereas P2 was enhanced specifically for the instrument of practice (piano or violin). AEPs observed for the instrument of practice were comparable to those of non-musician children about 3 years older in chronological age. The findings set into relief a general process by which the neocortical synaptic matrix is shaped by an accumulation of specific auditory experiences.
Roth, Thomas Nicolas
Presbycusis, or age-related hearing loss (ARHL), affects most elderly people. It is characterized by elevated hearing thresholds and reduced speech understanding, with the well-known negative consequences for communication and quality of social life. The hearing loss is connected to age-related histologic changes, as described and classified by Schuknecht. Aging itself is a multifactorial, genetically driven process that is influenced by oxidative stress and gradually leads to a reduced endocochlear potential and loss of the cells that are key players in sound transmission and supporting structures. Oxidative stress is caused by damaging factors such as noise, infection, and other systemic factors. All reparative mechanisms in acute and chronic cochlear damage attempt to reduce oxidative stress and to balance inner-ear homeostasis. Accurate clinical assessment of ARHL starts with the differentiation between peripheral and central components. Treatment of the peripheral hearing loss often involves hearing aids, whereas auditory and psychologic training seems to be important in central auditory disturbance.
Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia
Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10-week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex and engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668
Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning
Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch edge induced stronger attenuation of neural activity than the higher notch edge, and that focused auditory attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective for tinnitus frequencies ≤ 8 kHz than for tinnitus frequencies > 8 kHz, and that training should be employed over the long term in order to induce more persistent effects.
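The core signal-processing step of TMNMT, removing music energy in a band around the individual tinnitus frequency, can be sketched with a standard band-stop filter. The one-octave notch width, the filter order, and the 4-kHz tinnitus frequency below are illustrative assumptions, not the parameters used in the studies.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100           # audio sample rate (Hz)
f_tinnitus = 4000.0  # hypothetical individual tinnitus frequency (Hz)

# One-octave notch centred on the tinnitus frequency
# (half an octave on each side).
lo, hi = f_tinnitus / np.sqrt(2), f_tinnitus * np.sqrt(2)
sos = butter(6, [lo, hi], btype="bandstop", fs=fs, output="sos")

t = np.arange(0, 1.0, 1 / fs)
# Stand-in for music: equal tones inside (4 kHz) and outside (1 kHz) the notch.
music = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 4000 * t)
notched = sosfiltfilt(sos, music)  # zero-phase filtering

def tone_power(x, f):
    """Magnitude of the FFT bin at frequency f (1-Hz resolution here)."""
    spec = np.abs(np.fft.rfft(x))
    return float(spec[int(round(f * len(x) / fs))])

# The 4-kHz component is strongly attenuated; the 1-kHz component survives.
attenuation = tone_power(notched, 4000) / tone_power(music, 4000)
```

A Butterworth band-stop places transmission zeros at the notch centre, so energy at the tinnitus frequency itself is suppressed almost completely while the surrounding spectrum, which is meant to drive lateral inhibition, passes through.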
Kishan, Amar U.; Lee, Charles C.; Winer, Jeffery A.
Branched axons (BAs) projecting to different areas of the brain can create multiple feature-specific maps or synchronize processing in remote targets. We examined the organization of BAs in the cat auditory forebrain using two sensitive retrograde tracers. In one set of experiments (n=4), the tracers were injected into different frequency-matched loci in the primary auditory area (AI) and the anterior auditory field (AAF). In the other set (n=4), we injected primary, non-primary, or limbic cortical areas. After mapped injections, percentages of double labeled cells (PDLs) in the medial geniculate body (MGB) ranged from 1.4% (ventral division) to 2.8% (rostral pole). In both ipsilateral and contralateral areas AI and AAF, the average PDLs were <1%. In the unmapped cases, the MGB PDLs ranged from 0.6% (ventral division) after insular cortex injections to 6.7% (dorsal division) after temporal cortex injections. Cortical PDLs ranged from 0.1% (ipsilateral AI injections) to 3.7% (contralateral AII injections). PDLs within the smaller (minority) projection population were significantly higher than those in the overall population. About 2% of auditory forebrain projection cells have BAs, and such cells are organized differently than those in the subcortical auditory system, where BAs can be far more numerous. Forebrain branched projections follow different organizational rules than their unbranched counterparts. Finally, the relatively larger proportion of visual and somatic sensory forebrain BAs suggests modality-specific rules for BA organization. PMID:18294776
Larson, Eric; Lee, Adrian K C
Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies.
Keitel, Anne; Ince, Robin A A; Gross, Joachim; Kayser, Christoph
The timing of slow auditory cortical activity aligns to the rhythmic fluctuations in speech. This entrainment is considered to be a marker of the prosodic and syllabic encoding of speech, and has been shown to correlate with intelligibility. Yet, whether and how auditory cortical entrainment is influenced by the activity in other speech-relevant areas remains unknown. Using source-localized MEG data, we quantified the dependency of auditory entrainment on the state of oscillatory activity in fronto-parietal regions. We found that delta band entrainment interacted with the oscillatory activity in three distinct networks. First, entrainment in the left anterior superior temporal gyrus (STG) was modulated by beta power in orbitofrontal areas, possibly reflecting predictive top-down modulations of auditory encoding. Second, entrainment in the left Heschl's gyrus and anterior STG was dependent on alpha power in central areas, in line with the importance of motor structures for phonological analysis. And third, entrainment in the right posterior STG modulated theta power in parietal areas, consistent with the engagement of semantic memory. These results illustrate the topographical network interactions of auditory delta entrainment and reveal distinct cross-frequency mechanisms by which entrainment can interact with different cognitive processes underlying speech perception.
Farley, Brandon J.
How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts. PMID:26269558
Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359
Robinson, Benjamin L.; Harper, Nicol S.; McAlpine, David
Neural adaptation is central to sensation. Neurons in auditory midbrain, for example, rapidly adapt their firing rates to enhance coding precision of common sound intensities. However, it remains unknown whether this adaptation is fixed, or dynamic and dependent on experience. Here, using guinea pigs as animal models, we report that adaptation accelerates when an environment is re-encountered—in response to a sound environment that repeatedly switches between quiet and loud, midbrain neurons accrue experience to find an efficient code more rapidly. This phenomenon, which we term meta-adaptation, suggests a top–down influence on the midbrain. To test this, we inactivate auditory cortex and find acceleration of adaptation with experience is attenuated, indicating a role for cortex—and its little-understood projections to the midbrain—in modulating meta-adaptation. Given the prevalence of adaptation across organisms and senses, meta-adaptation might be similarly common, with extensive implications for understanding how neurons encode the rapidly changing environments of the real world. PMID:27883088
Geissler, Diana B.; Schmidt, H. Sabine; Ehret, Günter
Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition and naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who have different knowledge about and are differently motivated to respond to acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects in AI, AAF, and UF the integration of sound properties with animal state-dependent factors, in the higher-order field AII the news value of a given sound in the behavioral context, and in the higher-order field DP the level of maternal motivation and, by left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific, preparing different outputs of the AC fields for perceiving, recognizing, and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest a differentiation between brains hormonally primed to know (mothers) and brains that acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition. PMID:27013959
Alam, Iftekhar; Ghatol, Ashok
Blindness is a sensory disability that is difficult to treat but can, to some extent, be alleviated by artificial aids. The paper describes the design of a high-resolution auditory perception system based on the principle of air sonar with binaural perception. The system is a vision-substitution aid for blind persons. The blind person wears ultrasonic eyeglasses with an ultrasonic sensor array embedded in them. The system has been designed to operate in multiresolution modes. The ultrasonic sound from the transmitter array is reflected back by objects falling within the array's beam and is received. The received signal is converted to a sound signal, which is presented stereophonically for auditory perception. As background work for the system implementation, a detailed study was carried out covering the range-analysis procedure, the analysis of space-time signals, acoustic sensors, amplification methods, and noise removal using filters. Finally, the system implementation, covering both hardware and software, is described. Experimental results on blind subjects and inferences obtained during the study are also included.
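The basic range computation behind such an air-sonar aid follows directly from the round-trip travel time of the ultrasonic ping. The sketch below shows this, together with a hypothetical distance-to-pitch mapping for the stereophonic auditory presentation; the mapping parameters are illustrative, not taken from the paper.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_distance(round_trip_s):
    """Range to an obstacle from the round-trip time of an ultrasonic ping.
    The ping travels out and back, so the one-way distance is half."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def distance_to_pitch(distance_m, f_near=2000.0, f_far=200.0, max_range_m=5.0):
    """Hypothetical sonification: map range linearly onto an audible pitch,
    with nearer obstacles sounding higher (all parameters illustrative)."""
    d = min(max(distance_m, 0.0), max_range_m)
    return f_near + (f_far - f_near) * d / max_range_m

# A 10 ms round trip corresponds to an obstacle about 1.7 m away.
d = echo_distance(0.010)
```

Presenting the resulting tone with an interaural level or time difference derived from the two receivers then gives the binaural (left/right) component of the percept.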
Hamberger, Marla J.; Seidel, William T.
Naming is generally considered a left hemisphere function without precise localization. However, recent cortical stimulation studies demonstrate a modality-related anatomical dissociation, in that anterior temporal stimulation disrupts auditory description naming (“auditory naming”), but not visual object naming (“visual naming”), whereas posterior temporal stimulation disrupts naming on both tasks. We hypothesized that patients with anterior temporal abnormalities would exhibit impaired auditory naming, yet normal range visual naming, whereas posterior temporal patients would exhibit impaired performance on both tasks. Thirty-four patients with documented anterior temporal abnormalities and 14 patients with documented posterior temporal abnormalities received both naming tests. As hypothesized, patients with anterior temporal abnormalities demonstrated impaired auditory naming, yet normal range visual naming performance. Patients with posterior temporal abnormalities were impaired in visual naming, however, auditory naming scores were intact. Although these group patterns were statistically significant, on an individual basis, auditory-visual naming asymmetries better predicted whether individual patients had anterior or posterior temporal abnormalities. These behavioral findings are generally consistent with stimulation results, suggesting that modality specificity is inherent in the organization of language, with predictable neuroanatomical correlates. Results also carry clinical implications regarding localizing dysfunction, identifying and characterizing naming deficits, and potentially, in treating neurologically-based language disorders. PMID:19573271
Nieto-Diego, Javier; Malmierca, Manuel S.
Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
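SSA in such oddball recordings is commonly summarized by an index comparing a tone's response when it occurs as the rare deviant versus as the repeated standard. A minimal sketch of this index, using hypothetical spike counts:

```python
import numpy as np

def ssa_index(resp_deviant, resp_standard):
    """Common SSA index: (d - s) / (d + s), ranging from -1 to 1.
    Values near 1 mean the response to a tone is much stronger when it
    is the rare deviant than when it is the repeated standard."""
    d = float(np.mean(resp_deviant))
    s = float(np.mean(resp_standard))
    return (d - s) / (d + s)

# Illustrative spike counts per trial (hypothetical numbers):
primary = ssa_index([12, 10, 11], [9, 10, 8])    # weak adaptation
nonprimary = ssa_index([14, 15, 13], [4, 5, 3])  # strong adaptation
```

In the abstract's terms, the finding is that this index is larger and grows faster in the nonprimary fields than in A1, as the hypothetical numbers above are chosen to illustrate.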
NR2B subunit-dependent long-term potentiation enhancement in the rat cortical auditory system in vivo following masking of patterned auditory input by white noise exposure during early postnatal life.
Hogsden, Jennifer L; Dringenberg, Hans C
The composition of N-methyl-D-aspartate (NMDA) receptor subunits influences the degree of synaptic plasticity expressed during development and into adulthood. Here, we show that theta-burst stimulation of the medial geniculate nucleus reliably induced NMDA receptor-dependent long-term potentiation (LTP) of field postsynaptic potentials recorded in the primary auditory cortex (A1) of urethane-anesthetized rats. Furthermore, substantially greater levels of LTP were elicited in juvenile animals (30-37 days old; approximately 55% maximal potentiation) than in adult animals (approximately 30% potentiation). Masking patterned sound via continuous white noise exposure during early postnatal life (from postnatal day 5 to postnatal day 50-60) resulted in enhanced, juvenile-like levels of LTP (approximately 70% maximal potentiation) relative to age-matched controls reared in unaltered acoustic environments (approximately 30%). Rats reared in white noise and then placed in unaltered acoustic environments for 40-50 days showed levels of LTP comparable to those of adult controls, indicating that white noise rearing results in a form of developmental arrest that can be overcome by subsequent patterned sound exposure. We explored the mechanisms mediating white noise-induced plasticity enhancements by local NR2B subunit antagonist application in A1. NR2B subunit antagonists (Ro 25-6981 or ifenprodil) completely reversed white noise-induced LTP enhancement at concentrations that did not affect LTP in adult or age-matched controls. We conclude that white noise exposure during early postnatal life results in the maintenance of juvenile-like, higher levels of plasticity in A1, an effect that appears to be critically dependent on NR2B subunit activation.
Yurgil, Kate A.; Golob, Edward J.
This study determined whether auditory cortical responses associated with mechanisms of attention vary with individual differences in working memory capacity (WMC) and perceptual load. The operation span test defined subjects with low vs. high WMC, who then discriminated target/nontarget tones while EEG was recorded. Infrequent white noise distracters were presented at midline or ±90° locations, and perceptual load was manipulated by varying nontarget frequency. Amplitude of the N100 to distracters was negatively correlated with WMC. Relative to targets, only high WMC subjects showed attenuated N100 amplitudes to nontargets. In the higher WMC group, increased perceptual load was associated with decreased P3a amplitudes to distracters and longer-lasting negative slow wave to nontargets. Results show that auditory cortical processing is associated with multiple facets of attention control related to WMC and possibly higher-level cognition. PMID:24016201
Pérez-González, David; Malmierca, Manuel S
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G
Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment.
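The stimulus construction behind this paradigm is simple to sketch: each ear receives a pure tone, and the small frequency difference between the ears sets the beat rate. Only the 10 Hz (alpha) and 4 Hz (theta) beat rates come from the abstract; the carrier frequencies, sample rate, and duration below are illustrative assumptions.

```python
import numpy as np

fs = 44100                        # sample rate (Hz); assumed
dur = 2.0                         # stimulus duration (s); assumed
f_left, f_right = 400.0, 410.0    # assumed carriers; their 10 Hz
                                  # difference is the alpha-band beat rate

t = np.arange(int(fs * dur)) / fs
left = np.sin(2 * np.pi * f_left * t)    # presented to the left ear only
right = np.sin(2 * np.pi * f_right * t)  # presented to the right ear only

# The beat is absent from either waveform alone; it emerges centrally
# from binaural integration. The monaural control sums both tones and
# presents the identical mix to both ears, so the beat is acoustic:
monaural = left + right

beat_freq = abs(f_right - f_left)
print(beat_freq)  # → 10.0
```

A 4 Hz (theta) condition would use carriers 4 Hz apart (e.g., 400 and 404 Hz) with the same construction.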
Pallas, Sarah L.
Loss of sensory input from peripheral organ damage, sensory deprivation, or brain damage can result in adaptive or maladaptive changes in sensory cortex. In previous research, we found that auditory cortical tuning and tonotopy were impaired by cross-modal invasion of visual inputs. Sensory deprivation is typically associated with a loss of inhibition. To determine whether inhibitory plasticity is responsible for this process, we measured pre- and postsynaptic changes in inhibitory connectivity in ferret auditory cortex (AC) after cross-modal plasticity. We found that blocking GABAA receptors increased responsiveness and broadened sound frequency tuning in the cross-modal group more than in the normal group. Furthermore, expression levels of glutamic acid decarboxylase (GAD) protein were increased in the cross-modal group. We also found that blocking inhibition unmasked visual responses of some auditory neurons in cross-modal AC. Overall, our data suggest a role for increased inhibition in reducing the effectiveness of the abnormal visual inputs and argue that decreased inhibition is not responsible for compromised auditory cortical function after cross-modal invasion. Our findings imply that inhibitory plasticity may play a role in reorganizing sensory cortex after cross-modal invasion, suggesting clinical strategies for recovery after brain injury or sensory deprivation. PMID:24288625
Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S
Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934
Baba, Hironori; Tsukano, Hiroaki; Hishida, Ryuichi; Takahashi, Kuniyuki; Horii, Arata; Takahashi, Sugata; Shibuki, Katsuei
Although temporal information processing is important in auditory perception, the mechanisms for coding tonal offsets are unknown. We investigated cortical responses elicited at the offset of tonal stimuli using flavoprotein fluorescence imaging in mice. Off-responses were clearly observed at the offset of tonal stimuli lasting for 7 s, but not after stimuli lasting for 1 s. Off-responses to the short stimuli appeared in a similar cortical region, when conditioning tonal stimuli lasting for 5–20 s preceded the stimuli. MK-801, an inhibitor of NMDA receptors, suppressed the two types of off-responses, suggesting that disinhibition produced by NMDA receptor-dependent synaptic depression might be involved in the off-responses. The peak off-responses were localized in a small region adjacent to the primary auditory cortex, and no frequency-dependent shift of the response peaks was found. Frequency matching of preceding tonal stimuli with short test stimuli was not required for inducing off-responses to short stimuli. Two-photon calcium imaging demonstrated significantly larger neuronal off-responses to stimuli lasting for 7 s in this field, compared with off-responses to stimuli lasting for 1 s. The present results indicate the presence of an auditory cortical field responding to long-lasting tonal offsets, possibly for temporal information processing. PMID:27687766
Rummell, Brian P; Klee, Jan L; Sigurdsson, Torfi
Many of the sounds that we perceive are caused by our own actions, for example when speaking or moving, and must be distinguished from sounds caused by external events. Studies using macroscopic measurements of brain activity in human subjects have consistently shown that responses to self-generated sounds are attenuated in amplitude. However, the underlying manifestation of this phenomenon at the cellular level is not well understood. To address this, we recorded the activity of neurons in the auditory cortex of mice in response to sounds generated by their own behavior. We found that the responses of auditory cortical neurons to these self-generated sounds were consistently attenuated, compared with the same sounds generated independently of the animals' behavior. This effect was observed in both putative pyramidal neurons and in interneurons and was stronger in lower layers of auditory cortex. Downstream of the auditory cortex, we found that responses of hippocampal neurons to self-generated sounds were almost entirely suppressed. Responses to self-generated optogenetic stimulation of auditory thalamocortical terminals were also attenuated, suggesting a cortical contribution to this effect. Further analyses revealed that the attenuation of self-generated sounds was not simply due to the nonspecific effects of movement or behavioral state on auditory responsiveness. However, the strength of attenuation depended on the degree to which self-generated sounds were expected to occur, in a cell-type-specific manner. Together, these results reveal the cellular basis underlying attenuated responses to self-generated sounds and suggest that predictive processes contribute to this effect.
de la Mothe, Lisa A; Blumell, Suzanne; Kajikawa, Yoshinao; Hackett, Troy A
The current working model of primate auditory cortex is constructed from a number of studies of both new and old world monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organization has not been fully established. In previous studies, we focused on the architecture and connections of the core and medial belt areas (de la Mothe et al., 2006a, J Comp Neurol 496:27-71; de la Mothe et al., 2006b, J Comp Neurol 496:72-96). In this study, the corticocortical connections of the lateral belt and parabelt were examined in the marmoset. Tracers were injected into both rostral and caudal portions of the lateral belt and parabelt. Both regions revealed topographic connections along the rostrocaudal axis, where caudal areas of injection had stronger connections with caudal areas, and rostral areas of injection with rostral areas. The lateral belt had strong connections with the core, belt, and parabelt, whereas the parabelt had strong connections with the belt but not the core. Label in the core from injections in the parabelt was significantly reduced or absent, consistent with the idea that the parabelt relies mainly on the belt for its cortical input. In addition, the present and previous studies indicate hierarchical principles of anatomical organization in the marmoset that are consistent with those observed in other primates.
Geiser, Eveline; Notter, Michael; Gabrieli, John D E
The temporal context of an acoustic signal can greatly influence its perception. The present study investigated the neural correlates underlying perceptual facilitation by regular temporal contexts in humans. Participants listened to temporally regular (periodic) or temporally irregular (nonperiodic) sequences of tones while performing an intensity discrimination task. Participants performed significantly better on intensity discrimination during periodic than nonperiodic tone sequences. There was greater activation in the putamen for periodic than nonperiodic sequences. Conversely, there was greater activation in bilateral primary and secondary auditory cortices (planum polare and planum temporale) for nonperiodic than periodic sequences. Across individuals, greater putamen activation correlated with lesser auditory cortical activation in both right and left hemispheres. These findings suggest that temporal regularity is detected in the putamen, and that such detection facilitates temporal-lobe cortical processing associated with superior auditory perception. Thus, this study reveals a corticostriatal system associated with contextual facilitation for auditory perception through temporal regularity processing.
Schreiner, Christoph E.
In primary auditory cortex (AI), broadly correlated firing has been commonly observed. In contrast, sharply synchronous firing has rarely been seen and has not been well characterized. Therefore, we examined cat AI local subnetworks using cross-correlation and spectrotemporal receptive field (STRF) analysis for neighboring neurons. Sharply synchronous firing responses were observed predominantly for neurons separated by <150 μm. This high synchrony was independent of layers and was present between all distinguishable cell types. The sharpest synchrony was seen in supragranular layers and between regular spiking units. Synchronous spikes conveyed more stimulus information than nonsynchronous spikes. Neighboring neurons in all layers had similar best frequencies and similar STRFs, with the highest similarity in supragranular and granular layers. Spectral tuning selectivity and latency were only moderately conserved in these local, high-synchrony AI subnetworks. Overall, sharp synchrony is a specific characteristic of fine-scale networks within the AI and local functional processing is well ordered and similar, but not identical, for neighboring neurons of all cell types. PMID:24259573
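The synchrony analysis described above rests on the cross-correlogram: a histogram of spike-time differences between two simultaneously recorded neurons, where a sharp peak near zero lag indicates synchronous firing. The toy version below uses arbitrary bin width and lag window, not the paper's parameters, and hypothetical spike times.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag_ms=10.0, bin_ms=1.0):
    """Histogram of pairwise spike-time differences (b minus a, in ms)
    within ±max_lag_ms. A sharp central bin (lag ≈ 0) is the signature
    of the fine-scale synchrony described for neighboring AI neurons.
    Toy illustration; parameters are arbitrary choices."""
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    diffs = [b - a for a in spikes_a for b in spikes_b
             if abs(b - a) <= max_lag_ms]
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

# Two hypothetical trains (ms) firing within ~0.3 ms of each other
a = [10.0, 55.0, 120.0]
b = [10.2, 55.1, 119.8]
counts, edges = cross_correlogram(a, b)
print(int(counts.sum()))  # → 3
```

In practice the raw correlogram is corrected for chance coincidences (e.g., with a shift predictor) before synchrony is declared; that step is omitted here.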
Luo, Huan; Poeppel, David
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (∼20–80 ms duration information) and the theta band (∼150–300 ms), corresponding to segmental and diphonic versus syllabic modulation rates, respectively. It has been hypothesized that auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that such non-speech stimuli with temporal structure matching speech-relevant scales (∼25 and ∼200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a mesoscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales. PMID:22666214
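Phase tracking of the kind reported here is typically quantified with inter-trial phase coherence: the length of the mean resultant vector of per-trial oscillatory phases at a given frequency and time point. This is a minimal sketch of that measure, not the paper's exact MEG analysis pipeline.

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence at one frequency/time point.

    `phases` are per-trial phase angles in radians. Returns 1 for
    perfect phase locking across trials, ~0 for random phases.
    """
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Perfectly locked trials vs. uniformly scattered phases
locked = [0.5] * 20
scattered = np.linspace(0, 2 * np.pi, 20, endpoint=False)
print(round(itpc(locked), 2))     # → 1.0
print(round(itpc(scattered), 2))  # → 0.0
```

Reliable tracking in the theta or low gamma band, as reported above, corresponds to ITPC values well above the chance level at those frequencies.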
Tomlin, Dani; Rance, Gary
Neurodevelopmental delay has been proposed as the underlying cause of the majority of cases of auditory processing disorder (APD). The current study employs the cortical auditory evoked potential (CAEP) to assess whether maturational differences of the central auditory nervous system (CANS) can be identified between children who do and do not meet the diagnostic criterion for APD. The P1-N1 complex of the CAEP has previously been used for tracking development of the CANS in children with hearing impairment. Twenty-seven children (7 to 12 years old) who failed an APD behavioral test battery were age-matched (within 3 months) to children who had passed the same battery. CAEP responses to 500-Hz tone burst stimuli were recorded and analyzed for latency and amplitude measures. The P1-N1 complex showed significant group differences: the children diagnosed with APD showed significantly increased latency (∼10 milliseconds) and significantly reduced amplitude (∼10 μV) of the early components of the CAEP compared with children with normal auditory processing. No significant differences were seen in the later P2 wave. The normal developmental course is a decrease in latency and an increase in amplitude as a function of age. The results of this study are, therefore, consistent with an immaturity of the CANS as an underlying cause of APD in children. PMID:27587924
The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei differ from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation -- comparable to repetitive tonal stimulation -- of the lemniscal system evokes three major types of changes in the physiological properties, such as the tuning to specific values of acoustic parameters, of cortical and subcortical auditory neurons through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulation of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273
Ortiz, T; Pérez-Serrano, J M; Coullaut, J; Fudio, S; Coullaut, J; Criado, J
Event-related potentials, which seem to provide an objective parameter reflecting cognitive functions, have been examined in depression. To evaluate the influence of visual and auditory stimuli on the P300 latency, we studied 42 patients with major depression and 21 normal subjects. Two experimental tasks were applied. The first was a series of 300 auditory stimuli, of which 255 (85%) were 1000-Hz tones (the frequent stimulus) and 45 (15%) were 2000-Hz tones (the rare stimulus). The second was a series of 300 visual stimuli presented in the center of a computer screen, of which 255 (85%) were black circles on a white background (the frequent stimulus; 9 cm diameter, 200 ms duration) and 45 (15%) were black squares on a white background (the rare stimulus; 9 cm diameter, 200 ms duration). The results showed an increase of P300 latency in depressive patients during both the auditory and visual tasks. No differences were found in reaction time to visual or auditory stimuli. These results are consistent with an impairment in brain function in depressive patients that is associated with cortical hypoactivity and deficits in perceptive (auditory or visual) functions.
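The stimulus proportions of this oddball design (300 trials, 85% frequent, 15% rare) can be sketched as a randomized sequence generator. The stimulus labels and random seed below are illustrative placeholders, not details from the study.

```python
import random

def oddball_sequence(n=300, p_rare=0.15,
                     frequent="1000Hz", rare="2000Hz", seed=0):
    """Build a randomized oddball sequence with the proportions
    described above (255 frequent / 45 rare for n=300).

    Labels and seed are hypothetical; real paradigms also constrain
    rare-stimulus spacing, which is omitted here for brevity.
    """
    n_rare = round(n * p_rare)
    seq = [rare] * n_rare + [frequent] * (n - n_rare)
    random.Random(seed).shuffle(seq)
    return seq

seq = oddball_sequence()
print(seq.count("2000Hz"), seq.count("1000Hz"))  # → 45 255
```

The P300 is then measured from ERP averages time-locked to the rare stimuli, which is where the latency increase in the depressed group was observed.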
Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M
Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the high-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the high-voice superiority observed in humans.
Threlkeld, Steven W.; Hill, Courtney A.; Rosen, Glenn D.; Fitch, R. Holly
Auditory temporal processing deficits have been suggested to play a causal role in language learning impairments, and evidence of cortical developmental anomalies (microgyria (MG), ectopia) has been reported for language-impaired populations. Rodent models have linked these features, by showing deficits in auditory temporal discrimination for rats with neuronal migration anomalies (MG, ectopia). Since evidence from human studies suggests that training with both speech and non-speech acoustic stimuli may improve language performance in developmentally language-disabled populations, we were interested in whether/how maturation and early experience might influence auditory processing deficits seen in male rats with induced focal cortical MG. Results showed that for both simple (Normal single tone), as well as increasingly complex auditory discrimination tasks (Silent gap in white noise and FM sweep), prior experience significantly improved acoustic discrimination performance -- in fact, beyond improvements seen with maturation only. Further, we replicated evidence that young adult rats with MG were significantly impaired at discriminating FM sweeps compared to shams. However, these MG effects were no longer seen when experienced subjects were retested in adulthood (even though deficits in short duration FM sweep detection were seen for adult MG rats with no early experience). Thus while some improvements in auditory processing were seen with normal maturation, the effects of early experience were even more profound, in fact resulting in amelioration of MG effects seen at earlier ages. These findings support the clinical view that early training intervention with appropriate acoustic stimuli could similarly ameliorate long-term processing impairments seen in some language-impaired children. PMID:19460626
Background: A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and on focal brain blood flow in the related sensory cortices.
Methods: Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and a flexed (20° flexion) position. Sensory evoked potentials were recorded during visual, auditory and somatosensory stimulation from the right occipital region, from Cz in accordance with the international 10–20 system, and from 2 cm posterior to C4. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy.
Results: Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased, when the neck was in the flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position.
Conclusions: Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306
Ohl, Frank W
Rhythmic activity appears in the auditory cortex in both microscopic and macroscopic observables and is modulated by both bottom-up and top-down processes. How this activity serves both types of processes is largely unknown. Here we review studies that have recently improved our understanding of potential functional roles of large-scale global dynamic activity patterns in auditory cortex. The experimental paradigm of auditory category learning allowed critical testing of the hypothesis that global auditory cortical activity states are associated with endogenous cognitive states mediating the meaning associated with an acoustic stimulus rather than with activity states that merely represent the stimulus for further processing.
Atencio, Craig A.; Schreiner, Christoph E.
Excitatory pyramidal neurons and inhibitory interneurons constitute the main elements of cortical circuitry and have distinctive morphologic and electrophysiological properties. Here, we differentiate them by analyzing the time course of their action potentials (APs) and characterizing their receptive field properties in auditory cortex. Pyramidal neurons have longer APs and discharge as Regular-Spiking Units (RSUs), while basket and chandelier cells, which are inhibitory interneurons, have shorter APs and are Fast-Spiking Units (FSUs). To compare these neuronal classes we stimulated cat primary auditory cortex neurons with a dynamic moving ripple stimulus and constructed single-unit spectrotemporal receptive fields (STRFs) and their associated nonlinearities. FSUs had shorter latencies, broader spectral tuning, greater stimulus specificity, and higher temporal precision than RSUs. The STRF structure of FSUs was more separable, suggesting more independence between spectral and temporal processing regimes. The nonlinearities associated with the two cell classes were indicative of higher feature selectivity for FSUs. These global functional differences between RSUs and FSUs suggest fundamental distinctions between putative excitatory and inhibitory neurons that shape auditory cortical processing. PMID:18400888
Samson, F; Zeffiro, T A; Doyon, J; Benali, H; Mottron, L
A continuum of phenotypes makes up the autism spectrum (AS). In particular, individuals show large differences in language acquisition, ranging from precocious speech to severe speech onset delay. However, the neurological origin of this heterogeneity remains unknown. Here, we sought to determine whether AS individuals differing in speech acquisition show different cortical responses to auditory stimulation and morphometric brain differences. Whole-brain activity following exposure to non-social sounds was investigated. Individuals in the AS were classified according to the presence or absence of Speech Onset Delay (AS-SOD and AS-NoSOD, respectively) and were compared with IQ-matched typically developing individuals (TYP). AS-NoSOD participants displayed greater task-related activity than TYP in the inferior frontal gyrus and peri-auditory middle and superior temporal gyri, which are associated with language processing. Conversely, the AS-SOD group only showed enhanced activity in the vicinity of the auditory cortex. We detected no differences in brain structure between groups. This is the first study to demonstrate the existence of differences in functional brain activity between AS individuals divided according to their pattern of speech development. These findings support the Trigger-threshold-target model and indicate that the occurrence of speech onset delay in AS individuals depends on the location of cortical functional reallocation, which favors perception in AS-SOD and language in AS-NoSOD.
The primary sensory cortices are characterized by a topographical mapping of basic sensory features which is considered to deteriorate in higher-order areas in favor of complex sensory features. Recently, however, retinotopic maps were also discovered in the higher-order visual, parietal and prefrontal cortices. The discovery of these maps enabled the distinction between visual regions, clarified their function and hierarchical processing. Could such extension of topographical mapping to high-order processing regions apply to the auditory modality as well? This question has been studied previously in animal models but only sporadically in humans, whose anatomical and functional organization may differ from that of animals (e.g. unique verbal functions and Heschl's gyrus curvature). Here we applied fMRI spectral analysis to investigate the cochleotopic organization of the human cerebral cortex. We found multiple mirror-symmetric novel cochleotopic maps covering most of the core and high-order human auditory cortex, including regions considered non-cochleotopic, stretching all the way to the superior temporal sulcus. These maps suggest that topographical mapping persists well beyond the auditory core and belt, and that the mirror-symmetry of topographical preferences may be a fundamental principle across sensory modalities. PMID:21448274
Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.; Read, Heather L.
Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, we quantify two forms of spike-timing precision: jitter and reliability. In all three fields, we find that jitter decreases logarithmically with increasing basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increasing sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
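The paper defines jitter and reliability via correlation-based measures; a deliberately simplified sketch of the two ideas (trial-to-trial spread of spike times, and trial-to-trial similarity of binned spike trains) might look like this, with the bin size, window, and first-spike simplification all illustrative rather than the study's actual method:

```python
import numpy as np

def jitter_ms(trials):
    """Simplified jitter: SD of first-spike times across trials (ms).
    `trials` is a list of 1-D arrays of spike times (ms) per stimulus trial."""
    firsts = np.array([t[0] for t in trials if len(t)])
    return float(firsts.std(ddof=1))

def reliability(trials, bin_ms=5.0, dur_ms=100.0):
    """Simplified reliability: mean pairwise correlation of binned spike counts
    across trials; 1.0 means identical spike trains on every trial."""
    edges = np.arange(0.0, dur_ms + bin_ms, bin_ms)
    counts = np.array([np.histogram(t, edges)[0] for t in trials])
    n = len(counts)
    corrs = [np.corrcoef(counts[i], counts[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

# Four perfectly repeating trials: zero jitter, reliability of 1.
trials = [np.array([10.0, 30.0, 55.0]) for _ in range(4)]
j = jitter_ms(trials)      # 0.0
r = reliability(trials)    # 1.0
```

Degrading the trials (e.g. adding Gaussian noise to each spike time) raises jitter and lowers reliability, which is the qualitative behavior the metrics are meant to capture.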
Hearing loss is a common feature of human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss in older age. Aging also has well documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control and re-allocation of attention. These cortical mechanisms are engaged during listening in effortful conditions in normal-hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful, cognitive load is constantly high, and the amount of available cognitive resources is reduced. This constant effortful listening and reduced cognitive spare capacity may be what accelerates cognitive decline in older adults with hearing loss. PMID:27242405
Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris
In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons and also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations with temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockade of GABAA receptors by gabazine (GBZ) application (10 μm, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μm saclofen; 10–50 μm CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of a single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli. PMID:23804094
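For discrete stimulus and response classes, the mutual information computed in such analyses reduces to the standard formula MI = Σ p(s,r) log2[p(s,r) / (p(s)p(r))]. A minimal sketch over a joint count table (the table below is hypothetical, not the study's data):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information in bits from a joint count table whose rows are
    stimuli and columns are response classes."""
    p = joint / joint.sum()                      # joint probabilities
    px = p.sum(axis=1, keepdims=True)            # stimulus marginal
    py = p.sum(axis=0, keepdims=True)            # response marginal
    nz = p > 0                                   # skip zero cells (0*log0 = 0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Two stimuli perfectly discriminated by two response classes: MI = 1 bit.
perfect = np.array([[10.0, 0.0], [0.0, 10.0]])
mi = mutual_information(perfect)
```

With a uniform (uninformative) table the same function returns 0 bits, so the measure directly quantifies how well the responses discriminate the stimuli.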
Chambers, Anna R.; Hancock, Kenneth E.; Sen, Kamal; Polley, Daniel B.
Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches. PMID:24990917
Barbour, Dennis L.; Wang, Xiaoqin
Contrary to findings in subcortical auditory nuclei, auditory cortex neurons have traditionally been described as spiking only at the onsets of simple sounds such as pure tones or bandpass noise and to acoustic transients in complex sounds. Furthermore, primary auditory cortex (A1) has traditionally been described as mostly tone responsive and the lateral belt area of primates as mostly noise responsive. The present study was designed to unify the study of these two cortical areas using random spectrum stimuli (RSS), a new class of parametric, wideband, stationary acoustic stimuli. We found that 60% of all neurons encountered in A1 and the lateral belt of awake marmoset monkeys (Callithrix jacchus) showed significant changes in firing rates in response to RSS. Of these, 89% showed sustained spiking in response to one or more individual RSS, a substantially greater percentage than would be expected from traditional studies, indicating that RSS are well suited for studying these two cortical areas. When firing rates elicited by RSS were used to construct linear estimates of frequency tuning for these sustained responders, the shape of the estimate function remained relatively constant throughout the stimulus interval and across the stimulus properties of mean sound level, spectral density, and spectral contrast. This finding indicates that frequency tuning computed from RSS reflects a robust estimate of the actual tuning of a neuron. Use of this estimate to predict rate responses to other RSS, however, yielded poor results, implying that auditory cortex neurons integrate information across frequency nonlinearly. No systematic difference in prediction quality between A1 and the lateral belt could be detected. PMID:12904480
Kotilahti, Kalle; Nissila, Ilkka; Makela, Riikka; Noponen, Tommi; Lipiainen, Lauri; Gavrielides, Nasia; Kajava, Timo; Huotilainen, Minna; Fellman, Vineta; Merilainen, Pekka; Katila, Toivo
We have used near-infrared spectroscopy (NIRS) to study hemodynamic auditory evoked responses in 7 full-term neonates. Measurements were made simultaneously above both auditory cortices with a 16-channel frequency-domain instrument to study the distribution of speech and music processing between the hemispheres. The stimulation consisted of 5-second samples of music and speech with a 25-second silent interval. In response to stimulation, a significant increase in the concentration of oxygenated hemoglobin ([HbO2]) was detected in 6 of the 7 subjects. The strongest [HbO2] responses were seen near the measurement location above the ear on both hemispheres. The mean latency of the maximum responses was 9.42 ± 1.51 s. On the left hemisphere (LH), the maximum amplitude of the average [HbO2] response was 0.76 ± 0.38 μM (mean ± SD) to the music stimuli and 1.00 ± 0.45 μM to the speech stimuli. On the right hemisphere (RH), the corresponding amplitudes were 1.29 ± 0.85 μM to the music stimuli and 1.23 ± 0.93 μM to the speech stimuli. The results indicate that auditory information is processed in both auditory cortices, but that the LH is more specialized in processing speech than music. No significant differences in the locations or latencies of the maximum responses relative to stimulus type were found.
Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning
Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus—tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are ongoing.
Coffey, Emily B J; Herholz, Sibylle C; Chepesiuk, Alexander M P; Baillet, Sylvain; Zatorre, Robert J
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance.
Crosse, Michael J; Butler, John S; Lalor, Edmund C
Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually. Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information about what the speaker is saying, but also, importantly, when the speaker is saying it. Studying how the brain uses this timing relationship to
Golding, Maryanne; Pearce, Wendy; Seymour, John; Cooper, Alison; Ching, Teresa; Dillon, Harvey
Finding ways to evaluate the success of hearing aid fittings in young infants has increased in importance with the implementation of hearing screening programs. Cortical auditory evoked potentials (CAEPs) can be recorded in infants and provide evidence of speech detection at the cortical level. The validity of this technique as a tool for hearing aid evaluation, however, has yet to be demonstrated. The present study examined the relationship between the presence or absence of CAEPs to speech stimuli and the outcomes of a parental questionnaire in young infants who were fitted with hearing aids. The presence or absence of responses was determined by an experienced examiner as well as by a statistical measure, Hotelling's T². A statistically significant correlation between CAEPs and questionnaire scores was found using the examiner's grading (rs = 0.45) and using the statistical grading (rs = 0.41), and there was reasonably good agreement between traditional response detection methods and the statistical analysis.
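The one-sample Hotelling's T² test used for objective response detection generalizes the t-test to a multivariate mean. A sketch under the assumption that each trial's ERP is reduced to p latency-window averages (this dimensionality reduction, and the numbers below, are illustrative, not the study's exact pipeline):

```python
import numpy as np
from scipy import stats

def hotelling_t2_pvalue(samples):
    """One-sample Hotelling's T^2 test of the null hypothesis mean == 0.

    samples: (n_trials, p) array, e.g. each trial's evoked response averaged
    within p successive latency windows. Returns the p-value via the exact
    F-distribution relationship F = (n - p) / (p * (n - 1)) * T^2.
    """
    n, p = samples.shape
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    f_stat = (n - p) / (p * (n - 1)) * t2
    return float(stats.f.sf(f_stat, p, n - p))

rng = np.random.default_rng(0)
# Hypothetical data: a consistent evoked response vs. pure noise trials.
response = rng.normal(1.0, 0.5, size=(30, 3))   # nonzero mean -> tiny p
noise = rng.normal(0.0, 0.5, size=(30, 3))      # zero mean -> p not small
p_resp = hotelling_t2_pvalue(response)
p_noise = hotelling_t2_pvalue(noise)
```

A small p-value is read as "a response is present", which is how such a statistic can stand in for an examiner's present/absent judgement.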
Palomäki, Kalle J; Tiitinen, Hannu; Mäkinen, Ville; May, Patrick; Alku, Paavo
We used magnetoencephalographic (MEG) measurements to study how speech sounds presented in a realistic spatial sound environment are processed in human cortex. A spatial sound environment was created by utilizing head-related transfer functions (HRTFs), and using a vowel, a pseudo-vowel, and a wide-band noise burst as stimuli. The behaviour of the most prominent auditory response, the cortically generated N1m, was investigated above the left and right hemisphere. We found that the N1m responses elicited by the vowel and by the pseudo-vowel were much larger in amplitude than those evoked by the noise burst. Corroborating previous observations, we also found that cortical activity reflecting the processing of spatial sound was more pronounced in the right than in the left hemisphere for all of the stimulus types and that both hemispheres exhibited contralateral tuning to sound direction.
Leach, Nicholas D; Nodal, Fernando R; Cordery, Patricia M; King, Andrew J; Bajo, Victoria M
The nucleus basalis (NB) in the basal forebrain provides most of the cholinergic input to the neocortex and has been implicated in a variety of cognitive functions related to the processing of sensory stimuli. However, the role that cortical acetylcholine release plays in perception remains unclear. Here we show that selective loss of cholinergic NB neurons that project to the cortex reduces the accuracy with which ferrets localize brief sounds and prevents them from adaptively reweighting auditory localization cues in response to chronic occlusion of one ear. Cholinergic input to the cortex was disrupted by making bilateral injections of the immunotoxin ME20.4-SAP into the NB. This produced a substantial loss of both p75 neurotrophin receptor (p75(NTR))-positive and choline acetyltransferase-positive cells in this region and of acetylcholinesterase-positive fibers throughout the auditory cortex. These animals were significantly impaired in their ability to localize short broadband sounds (40-500 ms in duration) in the horizontal plane, with larger cholinergic cell lesions producing greater performance impairments. Although they localized longer sounds with normal accuracy, their response times were significantly longer than controls. Ferrets with cholinergic forebrain lesions were also less able to relearn to localize sound after plugging one ear. In contrast to controls, they exhibited little recovery of localization performance after behavioral training. Together, these results show that cortical cholinergic inputs contribute to the perception of sound source location under normal hearing conditions and play a critical role in allowing the auditory system to adapt to changes in the spatial cues available.
Kimura, A; Imbe, H; Donishi, T
In the rat cortex, the two non-primary auditory areas, posterodorsal and ventral auditory areas, may constitute the two streams of auditory processing in their distinct projections to the posterior parietal and insular cortices. The posterior parietal cortex is considered crucial for auditory spatial processing and directed attention, while possible auditory function of the insular cortex is largely unclear. In this study, we electrophysiologically delineated an auditory area in the caudal part of the granular insular cortex (insular auditory area, IA) and examined efferent connections of IA with anterograde tracer biocytin to deduce the functional significance of IA. IA projected to the rostral agranular insular cortex, a component of the lateral prefrontal cortex. IA also projected to the adjacent dysgranular insular cortex and the caudal agranular insular cortex and sent feedback projections to cortical layer I of the primary and secondary somatosensory areas. Corticofugal projections terminated in auditory, somatosensory and visceral thalamic nuclei, and the bottom of the thalamic reticular nucleus that could overlap the visceral sector. The ventral part of the caudate putamen, the external cortex of the inferior colliculus and the central amygdaloid nucleus were also the main targets. IA exhibited neural response to transcutaneous electrical stimulation of the forepaw in addition to acoustic stimulation (noise bursts and pure tones). The results suggest that IA subserves diverse functions associated with somatosensory, nociceptive and visceral processing that may underlie sound-driven emotional and autonomic responses. IA, being potentially involved in such extensive cross-modal sensory interactions, could also be an important anatomical node of auditory processing linked to higher neural processing in the prefrontal cortex.
Squires, K. C.; Squires, N. K.; Hillyard, S. A.
Cortical evoked potentials were recorded from human subjects performing an auditory detection task with confidence rating responses. Unlike earlier studies that used similar procedures, the observation interval during which the auditory signal could occur was clearly marked by a visual cue light. By precisely defining the observation interval and, hence, synchronizing all perceptual decisions to the evoked potential averaging epoch, it was possible to demonstrate that high-confidence false alarms are accompanied by late-positive P3 components equivalent to those for equally confident hits. Moreover, the hit and false alarm evoked potentials were found to covary similarly with variations in confidence rating and to have similar amplitude distributions over the scalp. In a second experiment, it was demonstrated that correct rejections can be associated with a P3 component larger than that for hits. Thus it was possible to show, within the signal detection paradigm, how the two major factors of decision confidence and expectancy are reflected in the P3 component of the cortical evoked potential.
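The signal detection framework behind this paradigm summarizes behavioral sensitivity with d′ computed from hit and false-alarm rates. A minimal sketch (the counts and the log-linear correction are illustrative, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate) from a yes/no
    detection table. A log-linear correction (add 0.5 to counts, 1 to totals)
    keeps 0% and 100% rates finite."""
    z = NormalDist().inv_cdf                      # inverse normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: 75% hits, 25% false alarms over 100 trials each.
d = d_prime(75, 25, 25, 75)    # roughly 1.33
```

An observer responding at chance (equal hit and false-alarm rates) gets d′ = 0, so the statistic separates sensitivity from response bias, which is exactly the separation the confidence-rating procedure exploits.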
Zhu, Xiaoqing; Wang, Fang; Hu, Huifang; Sun, Xinde; Kilgard, Michael P.; Merzenich, Michael M.
It has previously been shown that environmental enrichment can enhance structural plasticity in the brain and thereby improve cognitive and behavioral function. In this study, we reared developmentally noise-exposed rats in an acoustic-enriched environment for ∼4 weeks to investigate whether or not enrichment could restore developmentally degraded behavioral and neuronal processing of sound frequency. We found that noise-exposed rats had significantly elevated sound frequency discrimination thresholds compared with age-matched naive rats. Environmental acoustic enrichment nearly restored to normal the behavioral deficit resulting from early disrupted acoustic inputs. Signs of both degraded frequency selectivity of neurons as measured by the bandwidth of frequency tuning curves and decreased long-term potentiation of field potentials recorded in the primary auditory cortex of these noise-exposed rats also were reversed partially. The observed behavioral and physiological effects induced by enrichment were accompanied by recovery of cortical expressions of certain NMDA and GABAA receptor subunits and brain-derived neurotrophic factor. These studies in a rodent model show that environmental acoustic enrichment promotes recovery from early noise-induced auditory cortical dysfunction and indicate a therapeutic potential of this noninvasive approach for normalizing neurological function from pathologies that cause hearing and associated language impairments in older children and adults. PMID:24741032
Otsuru, Naofumi; Tsuruhara, Aki; Motomura, Eishi; Tanii, Hisashi; Nishihara, Makoto; Inui, Koji; Kakigi, Ryusuke
Nicotine is known to have enhancing effects on some aspects of attention and cognition. The purpose of the present study was to elucidate the effects of nicotine on pre-attentive change-related cortical activity. Change-related cortical activity in response to an abrupt increase (3 dB) and decrease (6 dB) in sound pressure in a continuous sound was recorded by using magnetoencephalography. Nicotine was administered with a nicotine gum (4 mg of nicotine). Eleven healthy nonsmokers were tested with a double-blind and placebo-controlled design. Effects of nicotine on the main component of the onset response peaking at around 50 ms (P50m) and the main component of the change-related response at around 120 ms (Change-N1m) were investigated. Nicotine failed to affect P50m, while it significantly increased the amplitude of Change-N1m evoked by both auditory changes. The magnitude of the amplitude increase was similar among subjects regardless of the magnitude of the baseline response, which resulted in the percent increase of Change-N1m being greater for subjects with Change-N1m of smaller amplitude. Since Change-N1m represents a pre-attentive automatic process to encode new auditory events, the present results suggest that nicotine can exert beneficial cognitive effects without a direct impact on attention.
Takahashi, Hirokazu; Yokota, Ryo; Suzurikawa, Jun; Kanzaki, Ryohei
Intrinsic plastic properties of the auditory cortex allow dynamic remodeling of its functional organization with training. Neurorehabilitation could therefore benefit from electrical stimulation that modifies synaptic strength as desired. Here we show that the auditory cortex of rats can be modified by intracortical microstimulation (ICMS) associated with tone stimuli, on the basis of spike timing-dependent plasticity (STDP). Two kinds of ICMS were applied: a pairing ICMS following a tone-induced excitatory synaptic input, and an anti-pairing ICMS preceding a tone-induced input. The pairing and anti-pairing ICMS produced potentiation and depression, respectively, in responses to the paired tones at a particular test frequency, and thereby modified the tuning property of the auditory cortical neurons. In addition, we demonstrated that our experimental setup has the potential to directly measure how anesthetic agents and pharmacological manipulations affect ICMS-induced plasticity, and thus will serve as a powerful platform for investigating the neural basis of this plasticity.
McDermott, Josh H; Oxenham, Andrew J
The perception of music depends on many culture-specific factors, but is also constrained by properties of the auditory system. This has been best characterized for those aspects of music that involve pitch. Pitch sequences are heard in terms of relative as well as absolute pitch. Pitch combinations give rise to emergent properties not present in the component notes. In this review we discuss the basic auditory mechanisms contributing to these and other perceptual effects in music.
Rissling, Anthony J.; Miyakoshi, Makoto; Sugar, Catherine A.; Braff, David L.; Makeig, Scott; Light, Gregory A.
Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant−Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics. PMID:25379456
Bonte, Milene; Hausfeld, Lars; Scharke, Wolfgang; Valente, Giancarlo; Formisano, Elia
Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and underlies flexible, goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single-trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and subjects' behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that successful selection, processing, and retention of task-relevant sound properties rely on the joint encoding of information across early and higher-order regions of the auditory cortex.
Lee, Adrian K. C.; Rajaram, Siddharth; Xia, Jing; Bharadwaj, Hari; Larson, Eric; Hämäläinen, Matti S.; Shinn-Cunningham, Barbara G.
In order to extract information in a rich environment, we focus on different features that allow us to direct attention to whatever source is of interest. The cortical network deployed during spatial attention, especially in vision, is well characterized. For example, visuospatial attention engages a frontoparietal network including the frontal eye fields (FEFs), which modulate activity in visual sensory areas to enhance the representation of an attended visual object. However, relatively little is known about the neural circuitry controlling attention directed to non-spatial features, or to auditory objects or features (either spatial or non-spatial). Here, using combined magnetoencephalography (MEG) and anatomical information obtained from MRI, we contrasted cortical activity when observers attended to different auditory features given the same acoustic mixture of two simultaneous spoken digits. Leveraging the fine temporal resolution of MEG, we establish that activity in left FEF is enhanced both prior to and throughout the auditory stimulus when listeners direct auditory attention to target location compared to when they focus on target pitch. In contrast, activity in the left posterior superior temporal sulcus (STS), a region previously associated with auditory pitch categorization, is greater when listeners direct attention to target pitch rather than target location. This differential enhancement is only significant after observers are instructed which cue to attend, but before the acoustic stimuli begin. We therefore argue that left FEF participates more strongly in directing auditory spatial attention, while the left STS aids auditory object selection based on the non-spatial acoustic feature of pitch. PMID:23335874
Rose, H J; Metherate, R
Stimulation of the medial geniculate body in an auditory thalamocortical slice elicits a short-latency current sink in the middle cortical layers, as would be expected following activation of thalamocortical relay neurons. However, corticothalamic neurons can have axon collaterals that project to the middle layers; thus, a middle-layer current sink could also result from antidromic activation of corticothalamic neurons and their axon collaterals. The likelihood of thalamic stimulation activating corticothalamic neurons would be reduced substantially if the corticothalamic pathway were not well preserved in the slice, and/or if the threshold for antidromic activation were significantly higher than for orthodromic activation. To determine the prevalence and threshold of antidromic activation, we recorded intracellularly from infragranular cortical neurons in brain slices from postnatal day 14-17 mice while stimulating the medial geniculate or thalamocortical pathway. Antidromic spikes were confirmed by spike collision and characterized according to spike latency "jitter" and the ability to follow a high-frequency (100 Hz) stimulus train. The ability to follow a 100-Hz tetanus was a reliable indicator of antidromic activation, but both antidromic and orthodromic spikes could have low jitter. Thalamic stimulation produced antidromic activation in two of 69 infragranular cortical neurons (<3%), indicating the presence of antidromic activity but implying a limited corticothalamic connection in the slice. Antidromic spikes in 13 additional neurons were obtained by stimulating axons in the thalamocortical pathway. The antidromic threshold averaged 214 ± 40.6 μA (range 6-475 μA), over seven times the orthodromic threshold for medial geniculate-evoked responses in layer IV extracellular (28 ± 5.4 μA) or intracellular (27 ± 5.6 μA) recordings. We conclude that medial geniculate stimulation activates relatively few corticothalamic neurons. Conversely, low
Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Başkent, Deniz; Formisano, Elia; Esposito, Fabrizio
Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we assessed simultaneously the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) that was evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.
Boscariol, Mirela; Guimarães, Catarina Abraão; Hage, Simone R de Vasconcellos; Garcia, Vera Lucia; Schmutzler, Kátia M R; Cendes, Fernando; Guerreiro, Marilisa Mantovani
Malformations of cortical development have been described in children and families with language-learning impairment. The objective of this study was to assess auditory processing in children with language-learning impairment in the presence or absence of a malformation of cortical development in the auditory processing areas. We selected 32 children (19 males), aged eight to 15 years, divided into three groups: Group I comprised 11 children with language-learning impairment and bilateral perisylvian polymicrogyria, Group II comprised 10 children with language-learning impairment and normal MRI, and Group III comprised 11 normal children. Behavioral auditory tests, such as the Random Gap Detection Test and the Dichotic Digits Test, were performed. Statistical analysis was performed using the Kruskal-Wallis test and Mann-Whitney test, with a level of significance of 0.05. The results revealed a statistically significant difference among the groups. Our data showed abnormalities in auditory processing of children in Groups I and II when compared with the control group, with children in Group I being more affected than children in Group II. Our data showed that the presence of a cortical malformation correlates with a worse performance in some tasks of auditory processing function.
Chabot, Nicole; Butler, Blake E; Lomber, Stephen G
Following sensory deprivation, primary somatosensory and visual cortices undergo crossmodal plasticity, which subserves the remaining modalities. However, controversy remains regarding the neuroplastic potential of primary auditory cortex (A1). To examine this, we identified cortical and thalamic projections to A1 in hearing cats and those with early- and late-onset deafness. Following early deafness, inputs from second auditory cortex (A2) are amplified, whereas the number originating in the dorsal zone (DZ) decreases. In addition, inputs from the dorsal medial geniculate nucleus (dMGN) increase, whereas those from the ventral division (vMGN) are reduced. In late-deaf cats, projections from the anterior auditory field (AAF) are amplified, whereas those from the DZ decrease. Additionally, in a subset of early- and late-deaf cats, area 17 and the lateral posterior nucleus (LP) of the visual thalamus project concurrently to A1. These results demonstrate that patterns of projections to A1 are modified following deafness, with statistically significant changes occurring within the auditory thalamus and some cortical areas. Moreover, we provide anatomical evidence for small-scale crossmodal changes in projections to A1 that differ between early- and late-onset deaf animals, suggesting that potential crossmodal activation of primary auditory cortex differs depending on the age of deafness onset.
Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.
Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather
Tremblay, Marie-Ève; Zettel, Martha L; Ison, James R; Allen, Paul D; Majewska, Ania K
Normal aging is often accompanied by a progressive loss of receptor sensitivity in hearing and vision, whose consequences on cellular function in cortical sensory areas have remained largely unknown. By examining the primary auditory (A1) and visual (V1) cortices in two inbred strains of mice undergoing either age-related loss of audition (C57BL/6J) or vision (CBA/CaJ), we were able to describe cellular and subcellular changes that were associated with normal aging (occurring in A1 and V1 of both strains) or specifically with age-related sensory loss (only in A1 of C57BL/6J or V1 of CBA/CaJ), using immunocytochemical electron microscopy and light microscopy. While the changes were subtle in neurons, glial cells and especially microglia were transformed in aged animals. Microglia became more numerous and irregularly distributed, displayed more variable cell body and process morphologies, occupied smaller territories, and accumulated phagocytic inclusions that often displayed ultrastructural features of synaptic elements. Additionally, evidence of myelination defects was observed, and aged oligodendrocytes became more numerous and were more often encountered in contiguous pairs. Most of these effects were profoundly exacerbated by age-related sensory loss. Together, our results suggest that the age-related alteration of glial cells in sensory cortical areas can be accelerated by activity-driven central mechanisms that result from an age-related loss of peripheral sensitivity. In light of our observations, these age-related changes in sensory function should be considered when investigating cellular, cortical, and behavioral functions throughout the lifespan in these commonly used C57BL/6J and CBA/CaJ mouse models.
Lewis, James W; Talkington, William J; Tallaksen, Katherine C; Frum, Chris A
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds, a quantitative measure of change in entropy of the acoustic signals over time, and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the
van Dam, Wessel O; van Dongen, Eelco V; Bekkering, Harold; Rueschemeyer, Shirley-Ann
Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F. Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6, 576-582, 2005; Barsalou, L. W. Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577-660, 1999]. Proponents of a strong embodied account argue that activity in perception/action areas is triggered automatically upon encountering a word and reflects static semantic representations. In contrast to what would be expected if lexical-semantic representations were automatically triggered upon encountering a word, a number of studies failed to find motor-related activity for words with a putative action-semantic component [Raposo, A., Moss, H. E., Stamatakis, E. A., & Tyler, L. K. Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia, 47, 388-396, 2009; Rueschemeyer, S.-A., Brass, M., & Friederici, A. D. Comprehending prehending: Neural correlates of processing verbs with motor stems. Journal of Cognitive Neuroscience, 19, 855-865, 2007]. In a recent fMRI study, Van Dam and colleagues [Van Dam, W. O., Van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical-semantic representations. Human Brain Mapping, in press] showed that the degree to which a modality-specific region contributes to a representation changes considerably as a function of context. In the current study, we presented words for which both motor and visual properties (e.g., tennis ball, boxing glove) were important in constituting the concept. Our aim was to corroborate earlier findings of flexible and context-dependent language representations by testing whether functional integration between auditory brain regions and perception/action areas is modulated by context.
Sherman, S. Murray
Little is known regarding the synaptic properties of corticocortical connections from one cortical area to another. To expand on this knowledge, we assessed the synaptic properties of excitatory projections from the primary to secondary auditory cortex and vice versa. We identified 2 types of postsynaptic responses. The first class of responses have larger initial excitatory postsynaptic potentials (EPSPs), exhibit paired-pulse depression, are limited to ionotropic glutamate receptor activation, and have larger synaptic terminals; the second has smaller initial EPSPs, paired-pulse facilitation, metabotropic glutamate receptor activation, and smaller synaptic terminals. These responses are similar to the driver and modulator properties previously identified for thalamic and thalamocortical circuitry, suggesting that the same classification may extend to corticocortical inputs and have an implication for the functional organization of corticocortical circuits. PMID:21385835
Bieszczad, Kasia M; Weinberger, Norman M
Associative memory for auditory-cued events involves specific plasticity in the primary auditory cortex (A1) that facilitates responses to tones which gain behavioral significance, by modifying representational parameters of sensory coding. Learning strategy, rather than the amount or content of learning, can determine this learning-induced cortical (high order) associative representational plasticity (HARP). Thus, tone-contingent learning with signaled errors can be accomplished either by (1) responding only during tone duration ("tone-duration" strategy, T-Dur), or (2) responding from tone onset until receiving an error signal for responses made immediately after tone offset ("tone-onset-to-error", TOTE). While rats using both strategies achieve the same high level of performance, only those using the TOTE strategy develop HARP, viz., frequency-specific decreased threshold (increased sensitivity) and decreased bandwidth (increased selectivity) (Berlau & Weinberger, 2008). The present study challenged the generality of learning strategy by determining if high motivation dominates in the formation of HARP. Two groups of adult male rats were trained to bar-press during a 5.0 kHz (10 s, 70 dB) tone for a water reward under either high (HiMot) or moderate (ModMot) levels of motivation. The HiMot group achieved a higher level of correct performance. However, terminal mapping of A1 showed that only the ModMot group developed HARP, i.e., increased sensitivity and selectivity in the signal-frequency band. Behavioral analysis revealed that the ModMot group used the TOTE strategy while HiMot subjects used the T-Dur strategy. Thus, type of learning strategy, not level of learning or motivation, is dominant for the formation of cortical plasticity.
Wilson, Tony W.; Hernandez, Olivia O.; Asherin, Ryan M.; Teale, Peter D.; Reite, Martin L.; Rojas, Donald C.
Neurobiological theories of schizophrenia and related psychoses have increasingly emphasized impaired neuronal coordination (i.e., dysfunctional connectivity) as central to the pathophysiology. Although neuroimaging evidence has mostly corroborated these accounts, the basic mechanism(s) of reduced functional connectivity remains elusive. In this study, we examine the developmental trajectory and underlying mechanism(s) of dysfunctional connectivity by using gamma oscillatory power as an index of local and long-range circuit integrity. An early-onset psychosis group and a matched cohort of typically developing adolescents listened to monaurally presented click-trains, as whole-head magnetoencephalography data were acquired. Consistent with previous work, gamma-band power was significantly higher in right auditory cortices across groups and conditions. However, patients exhibited significantly reduced overall gamma power relative to controls, and showed a reduced ear-of-stimulation effect indicating that ipsi- versus contralateral presentation had less impact on hemispheric power. Gamma-frequency oscillations are thought to be dependent on GABAergic interneuronal networks; thus these patients' impairment in generating and/or maintaining such activity may indicate that local circuit integrity is at least partially compromised early in the disease process. In addition, patients also showed abnormality in long-range networks (i.e., ear-of-stimulation effects), potentially suggesting that multiple stages along auditory pathways contribute to connectivity aberrations found in patients with psychosis. PMID:17557901
O'Connor, Kevin N.; Yin, Pingbo; Petkov, Christopher I.; Sutter, Mitchell L.
The focus of most research on auditory cortical neurons has concerned the effects of rather simple stimuli, such as pure tones or broad-band noise, or the modulation of a single acoustic parameter. Extending these findings to feature coding in more complex stimuli such as natural sounds may be difficult, however. Generalizing results from the simple to more complex case may be complicated by non-linear interactions occurring between multiple, simultaneously varying acoustic parameters in complex sounds. To examine this issue in the frequency domain, we performed a parametric study of the effects of two global features, spectral pattern (here ripple frequency) and bandwidth, on primary auditory (A1) neurons in awake macaques. Most neurons were tuned for one or both variables and most also displayed an interaction between bandwidth and pattern implying that their effects were conditional or interdependent. A spectral linear filter model was able to qualitatively reproduce the basic effects and interactions, indicating that a simple neural mechanism may be able to account for these interdependencies. Our results suggest that the behavior of most A1 neurons is likely to depend on multiple parameters, and so most are unlikely to respond independently or invariantly to specific acoustic features. PMID:21152347
Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke
Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5-20 ms). The correlation coefficient r² for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.
Fujioka, Takako; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo; Trainor, Laurel J
Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields showed prominent bilateral P100m, N250m, P320m and N450m peaks. Significant change in the peak latencies of all components except P100m was observed over time. Larger P100m and N450m amplitude as well as more rapid change of N250m amplitude and latency was associated with the violin rather than the noise stimuli. Larger P100m and P320m peak amplitudes in the left hemisphere than in the right are consistent with left-lateralized cortical development in this age group. A clear musical training effect was expressed in a larger and earlier N250m peak in the left hemisphere in response to the violin sound in musically trained children compared with untrained children. This difference coincided with pronounced morphological change in a time window between 100 and 400 ms, which was observed in musically trained children in response to violin stimuli only, whereas in untrained children a similar change was present regardless of stimulus type. This transition could be related to establishing a neural network associated with sound categorization and/or involuntary attention, which can be altered by music learning experience.
Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.
The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and tightly regulated by feed-forward inhibitory
Helenius, Päivi; Salmelin, Riitta; Richardson, Ulla; Leinonen, Seija; Lyytinen, Heikki
Reading difficulties are associated with problems in processing and manipulating speech sounds. Dyslexic individuals seem to have, for instance, difficulties in perceiving the length and identity of consonants. Using magnetoencephalography (MEG), we characterized the spatio-temporal pattern of auditory cortical activation in dyslexia evoked by three types of natural bisyllabic pseudowords (/ata/, /atta/, and /a a/), complex nonspeech sound pairs (corresponding to /atta/ and /a a/) and simple 1-kHz tones. The most robust difference between dyslexic and non-reading-impaired adults was seen in the left supratemporal auditory cortex 100 msec after the onset of the vowel /a/. This N100m response was abnormally strong in dyslexic individuals. For the complex nonspeech sounds and tone, the N100m response amplitudes were similar in dyslexic and nonimpaired individuals. The responses evoked by syllable /ta/ of the pseudoword /atta/ also showed modest latency differences between the two subject groups. The responses evoked by the corresponding nonspeech sounds did not differ between the two subject groups. Further, when the initial formant transition, that is, the consonant, was removed from the syllable /ta/, the N100m latency was normal in dyslexic individuals. Thus, it appears that dyslexia is reflected as abnormal activation of the auditory cortex already 100 msec after speech onset, manifested as abnormal response strengths for natural speech and as delays for speech sounds containing rapid frequency transition. These differences between the dyslexic and nonimpaired individuals also imply that the N100m response codes stimulus-specific features likely to be critical for speech perception. Which features of speech (or nonspeech stimuli) are critical in eliciting the abnormally strong N100m response in dyslexic individuals should be resolved in future studies.
Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G
Cross-modal reorganization following the loss of input from a sensory modality can recruit sensory-deprived cortical areas to process information from the remaining senses. Specifically, in early-deaf cats, the anterior auditory field (AAF) is unresponsive to auditory stimuli but can be activated by somatosensory and visual stimuli. Similarly, AAF neurons respond to tactile input in adult-deafened animals. To examine anatomical changes that may underlie this functional adaptation following early or late deafness, afferent projections to AAF were examined in hearing cats, and cats with early- or adult-onset deafness. Unilateral deposits of biotinylated dextran amine were made in AAF to retrogradely label cortical and thalamic afferents to AAF. In early-deaf cats, ipsilateral neuronal labeling in visual and somatosensory cortices increased by 329% and 101%, respectively. The largest increases arose from the anterior ectosylvian visual area and the anterolateral lateral suprasylvian visual area, as well as somatosensory areas S2 and S4. Consequently, labeling in auditory areas was reduced by 36%. The age of deafness onset appeared to influence afferent connectivity, with less marked differences observed in late-deaf cats. Profound changes to visual and somatosensory afferent connectivity following deafness may reflect corticocortical rewiring affording acoustically deprived AAF with cross-modal functionality.
Brugge, John F; Volkov, Igor O; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Fenoy, Albert; Steinschneider, Mitchell; Howard, Matthew A
Averaged auditory evoked potentials (AEPs) to bilaterally presented 100 Hz click trains were recorded from multiple sites simultaneously within Heschl's gyrus (HG) and on the posterolateral surface of the superior temporal gyrus (STG) in epilepsy-surgery patients. Three auditory fields were identified based on AEP waveforms and their distribution. Primary (core) auditory cortex was localized to posteromedial HG. Here the AEP was characterized by a robust polyphasic low-frequency field potential having a short onset latency and on which was superimposed a smaller frequency-following response to the click train. Core AEPs exhibited the lowest response threshold and highest response amplitude at one HG site with threshold rising and amplitude declining systematically on either side of it. The AEPs recorded anterolateral to the core, if present, were typically of low amplitude, with little or no evidence of short-latency waves or the frequency-following response that characterized core AEPs. We suggest that this area is part of a lateral auditory belt system. Robust AEPs, with waveforms demonstrably different from those of the core or lateral belt, were localized to the posterolateral surface of the STG and conform to previously described field PLST.
Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H
Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.
Jeon, Eun Kyung; Chiou, Li-Kuei; Kirby, Benjamin; Karsten, Sue; Turner, Christopher; Abbas, Paul
Objective Nucleus Hybrid CI users hear low-frequency sounds via acoustic stimulation and high-frequency sounds via electrical stimulation. This within-subject study compares three different methods of coordinating programming of the acoustic and electrical components of the Hybrid device. Speech perception and cortical auditory evoked potentials (CAEPs) were used to assess differences in outcome. The goals of this study were to determine (1) if the evoked potential measures could predict which programming strategy resulted either in better outcome on the speech perception task or was preferred by the listener, and (2) whether CAEPs could be used to predict which subjects benefitted most from having access to the electrical signal provided by the Hybrid implant. Design CAEPs were recorded from 10 Nucleus Hybrid CI users. Study participants were tested using three different experimental MAPs that differed in terms of how much overlap there was between the range of frequencies processed by the acoustic component of the Hybrid device and the range of frequencies processed by the electrical component. The study design included allowing participants to acclimatize for a period of up to 4 weeks with each experimental program prior to speech perception and evoked potential testing. Performance using the experimental MAPs was assessed using both a closed-set consonant recognition task and an adaptive test that measured the signal-to-noise ratio that resulted in 50% correct identification of a set of 12 spondees presented in background noise (SNR-50). Long-duration, synthetic vowels were used to record both the cortical P1-N1-P2 “onset” response and the auditory “change” or ACC response. Correlations between the evoked potential measures and performance on the speech perception tasks are reported. Results Differences in performance using the three programming strategies were not large. Peak-to-peak amplitude of the ACC response was not found to be sensitive enough to
Heinrichs-Graham, Elizabeth; Franzen, John D.; Knott, Nichole L.; White, Matthew L.; Wetzel, Martin W.; Wilson, Tony W.
The ability to attend to particular stimuli while ignoring others is crucial in goal-directed activities and has been linked with prefrontal cortical regions, including the dorsolateral prefrontal cortex (DLPFC). Both hyper- and hypo-activation in the DLPFC have been reported in patients with attention-deficit/hyperactivity disorder (ADHD) during many different cognitive tasks, but the network-level effects of such aberrant activity remain largely unknown. Using magnetoencephalography (MEG), we examined functional connectivity between regions of the DLPFC and the modality-specific auditory cortices during an auditory attention task in medicated and un-medicated adults with ADHD, and those without ADHD. Participants completed an attention task in two separate sessions (medicated/un-medicated), and each session consisted of two blocks (attend and no-attend). All MEG data were coregistered to structural MRI, corrected for head motion, and projected into source space. Subsequently, we computed the phase coherence (i.e., functional connectivity) between DLPFC regions and the auditory cortices. We found that un-medicated adults with ADHD exhibited greater phase coherence in the beta (14–30 Hz) and gamma (30–56 Hz) frequency ranges in attend and no-attend conditions compared to controls. Stimulant medication attenuated these differences, but did not fully eliminate them. These results suggest that aberrant bottom-up processing may engulf executive resources in ADHD. PMID:24495532
Warrier, Catherine M; Johnson, Krista L; Hayes, Erin A; Nicol, Trent; Kraus, Nina
The physiological mechanisms that contribute to abnormal encoding of speech in children with learning problems are yet to be well understood. Furthermore, speech perception problems appear to be particularly exacerbated by background noise in this population. This study compared speech-evoked cortical responses recorded in a noisy background to those recorded in quiet in normal children (NL) and children with learning problems (LP). Timing differences between responses recorded in quiet and in background noise were assessed by cross-correlating the responses with each other. Overall response magnitude was measured with root-mean-square (RMS) amplitude. Cross-correlation scores indicated that 23% of LP children exhibited cortical neural timing abnormalities such that their neurophysiological representation of speech sounds became distorted in the presence of background noise. The latency of the N2 response in noise was isolated as being the root of this distortion. RMS amplitudes in these children did not differ from NL children, indicating that this result was not due to a difference in response magnitude. LP children who participated in a commercial auditory training program and exhibited improved cortical timing also showed improvements in phonological perception. Consequently, auditory pathway timing deficits can be objectively observed in LP children, and auditory training can diminish these deficits.
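The timing analysis described above can be sketched in code: the cortical response recorded in noise is cross-correlated against the response recorded in quiet, and RMS amplitude is computed separately to rule out a simple magnitude difference. This is an illustrative reconstruction under assumed conventions, not the authors' actual analysis code; the function names and the Gaussian test pulses below are hypothetical.

```python
import numpy as np

def response_lag_ms(quiet: np.ndarray, noise: np.ndarray, fs: float):
    """Cross-correlate the noise-condition response against the
    quiet-condition response. Returns (peak correlation score, lag in ms);
    a low score or a large positive lag flags a response that is distorted
    or delayed in background noise."""
    q = (quiet - quiet.mean()) / quiet.std()
    n = (noise - noise.mean()) / noise.std()
    xcorr = np.correlate(n, q, mode="full") / len(q)
    k = int(np.argmax(xcorr))
    lag_samples = k - (len(q) - 1)  # > 0: response in noise is delayed
    return float(xcorr[k]), 1000.0 * lag_samples / fs

def rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude, used to check overall response magnitude."""
    return float(np.sqrt(np.mean(x ** 2)))
```

With two identically shaped responses offset by 20 ms, `response_lag_ms` returns a correlation score near 1 and a lag of 20 ms; a distorted N2 would instead lower the peak score.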
Teschner, Magnus J.; Seybold, Bryan A.; Malone, Brian J.; Hüning, Jana
The neural mechanisms that support the robust processing of acoustic signals in the presence of background noise in the auditory system remain largely unresolved. Psychophysical experiments have shown that signal detection is influenced by the signal-to-noise ratio (SNR) and the overall stimulus level, but this relationship has not been fully characterized. We evaluated the neural representation of frequency in rat primary auditory cortex by constructing tonal frequency response areas (FRAs) in primary auditory cortex for different SNRs, tone levels, and noise levels. We show that response strength and selectivity for frequency and sound level depend on interactions between SNRs and tone levels. At low SNRs, jointly increasing the tone and noise levels reduced firing rates and narrowed FRA bandwidths; at higher SNRs, however, increasing the tone and noise levels increased firing rates and expanded bandwidths, as is usually seen for FRAs obtained without background noise. These changes in frequency and intensity tuning decreased tone level and tone frequency discriminability at low SNRs. By contrast, neither response onset latencies nor noise-driven steady-state firing rates meaningfully interacted with SNRs or overall sound levels. Speech detection performance in humans was also shown to depend on the interaction between overall sound level and SNR. Together, these results indicate that signal processing difficulties imposed by high noise levels are quite general and suggest that the neurophysiological changes we see for simple sounds generalize to more complex stimuli. SIGNIFICANCE STATEMENT Effective processing of sounds in background noise is an important feature of the mammalian auditory system and a necessary feature for successful hearing in many listening conditions. Even mild hearing loss strongly affects this ability in humans, seriously degrading the ability to communicate. The mechanisms involved in achieving high performance in background noise are not
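The frequency response areas (FRAs) described above are, at their core, a grid of mean firing responses over tone frequency × level combinations, from which bandwidth can be read off at a criterion rate. A minimal sketch of assembling such a grid, assuming simplified (frequency, level, spike count) trial tuples; the function names and the criterion are illustrative, not taken from the paper.

```python
import numpy as np

def build_fra(trials, freqs, levels):
    """Assemble an FRA: mean spike count for each (level, frequency) cell.
    `trials` is an iterable of (freq_hz, level_db, spike_count) tuples,
    a simplified stand-in for real spike data."""
    fra = np.zeros((len(levels), len(freqs)))
    counts = np.zeros_like(fra)
    fi = {f: i for i, f in enumerate(freqs)}
    li = {l: i for i, l in enumerate(levels)}
    for f, l, c in trials:
        fra[li[l], fi[f]] += c
        counts[li[l], fi[f]] += 1
    return fra / np.maximum(counts, 1)  # mean count; empty cells stay 0

def bandwidth_octaves(fra_row, freqs, criterion):
    """Width of the responsive frequency range at one level, in octaves,
    using a simple firing-rate criterion."""
    f = np.asarray(freqs, dtype=float)
    above = np.flatnonzero(np.asarray(fra_row) >= criterion)
    if above.size < 2:
        return 0.0
    return float(np.log2(f[above[-1]] / f[above[0]]))
```

Narrowing of FRA bandwidth at low SNRs, as reported above, would show up here as fewer frequency bins exceeding the criterion.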
Carter, Lyndal; Dillon, Harvey; Seymour, John; Seeto, Mark; Van Dun, Bram
Previous studies have demonstrated that cortical auditory-evoked potentials (CAEPs) can be reliably elicited in response to speech stimuli in listeners wearing hearing aids. It is unclear, however, how close to the aided behavioral threshold (i.e., at what behavioral sensation level) a sound must be before a cortical response can reliably be detected. The purpose of this study was to systematically examine the relationship between CAEP detection and the audibility of speech sounds (as measured behaviorally), when the listener is wearing a hearing aid fitted to prescriptive targets. A secondary aim was to investigate whether CAEP detection is affected by varying the frequency emphasis of stimuli, so as to simulate variations to the prescribed gain-frequency response of a hearing aid. The results have direct implications for the evaluation of hearing aid fittings in nonresponsive adult clients, and indirect implications for the evaluation of hearing aid fittings in infants. Participants wore hearing aids while listening to speech sounds presented in a sound field. Aided thresholds were measured, and cortical responses evoked, under a range of stimulus conditions. The presence or absence of CAEPs was determined by an automated statistic. Participants were adults (6 females and 4 males). Participants had sensorineural hearing loss ranging from mild to severe-profound in degree. Participants' own hearing aids were replaced with a test hearing aid, with linear processing, during assessments. Pure-tone thresholds and hearing aid gain measurements were obtained, and a theoretical prediction of speech stimulus audibility for each participant (similar to those used for audibility predictions in infant hearing aid fittings) was calculated. Three speech stimuli, (/m/, /t/, and /g/) were presented aided (monaurally, nontest ear occluded), free field, under three conditions (+4 dB/octave, -4 dB/octave, and without filtering), at levels of 40, 50, and 60 dB SPL (measured for the
Papesh, Melissa A.; Billings, Curtis J.; Baltzell, Lucas S.
Objective To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. Methods CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and in speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. Results The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Conclusions Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Significance Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. PMID:25453611
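Presenting a syllable in speech-shaped noise at a fixed SNR, as in studies like this one, amounts to scaling the noise so that its RMS level sits the required number of decibels below the signal's RMS level. A generic sketch of that mixing step, under the usual RMS-based definition of SNR; this is not the authors' procedure and the names are mine.

```python
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested RMS signal-to-noise
    ratio in dB, then add it to `signal`."""
    rms_sig = np.sqrt(np.mean(signal ** 2))
    rms_noi = np.sqrt(np.mean(noise ** 2))
    # 20*log10(rms_sig / target_noise_rms) == snr_db
    target_noise_rms = rms_sig / 10 ** (snr_db / 20.0)
    return signal + noise * (target_noise_rms / rms_noi)
```

Recomputing the SNR from the scaled noise recovers the requested value exactly, which makes the scaling easy to verify before a recording session.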
Van Dun, Bram; Kania, Anna; Dillon, Harvey
Cortical auditory evoked potentials (CAEPs) are influenced by the characteristics of the stimulus, including level and hearing aid gain. Previous studies have measured CAEPs aided and unaided in individuals with normal hearing. There is a significant difference between providing amplification to a person with normal hearing and a person with hearing loss. This study investigated this difference and the effects of stimulus signal-to-noise ratio (SNR) and audibility on the CAEP amplitude in a population with hearing loss. Twelve normal-hearing participants and 12 participants with a hearing loss participated in this study. Three speech sounds—/m/, /g/, and /t/—were presented in the free field. Unaided stimuli were presented at 55, 65, and 75 dB sound pressure level (SPL) and aided stimuli at 55 dB SPL with three different gains in steps of 10 dB. CAEPs were recorded and their amplitudes analyzed. Stimulus SNRs and audibility were determined. No significant effect of stimulus level or hearing aid gain was found in normal hearers. Conversely, a significant effect was found in hearing-impaired individuals. Audibility of the signal, which in some cases is determined by the signal level relative to threshold and in other cases by the SNR, is the dominant factor explaining changes in CAEP amplitude. CAEPs can potentially be used to assess the effects of hearing aid gain in hearing-impaired users. PMID:27587919
Neuhoff, John G.; Bilecen, Deniz; Mustovic, Henrietta; Schachinger, Hartmut; Seifritz, Erich; Scheffler, Klaus; di Salle, Francesco
Relative motion between a sound source and a listener creates a change in acoustic intensity that can be used to anticipate the source's approach. Humans have been shown to overestimate the intensity change of rising compared to falling intensity sounds and underestimate the time-to-contact of approaching sound sources. From an evolutionary perspective, this perceptual priority for looming sounds may represent an adaptive advantage that provides an increased margin of safety for responding to approaching auditory objects. Here, using functional magnetic resonance imaging, we show that the prioritization of rising contrasted with falling intensity sine-tones is grounded in a specific neural network. This network is predominantly composed of the superior temporal sulci, the middle temporal gyri, the right temporo-parietal junction, the motor and premotor cortices mainly on the right hemisphere, the left frontal operculum, and the left superior posterior cerebellar cortex. These regions are critical for the allocation of attention, the analysis of space, object recognition, and neurobehavioral preparation for action. Our results identify a widespread neural network underpinning the perceptual priority for looming sounds that can be used in translating sensory information into preparedness for adverse events and appropriate action. [Work supported by the Swiss and the American NSFs.]
Naie, Katja; Hahnloser, Richard H R
In the process of song learning, songbirds such as the zebra finch shape their initial soft and poorly formed vocalizations (subsong) first into variable plastic songs with a discernable recurring motif and then into highly stereotyped adult songs. A premotor brain area critically involved in plastic and adult song production is the cortical nucleus HVC. One of HVC's primary afferents, the nucleus interface of the nidopallium (NIf), provides a significant source of auditory input to HVC. However, the premotor involvement of NIf has not been extensively studied yet. Here we report that brief and reversible pharmacological inactivation of NIf in juvenile birds leads to transient degradation of plastic song toward subsong, as revealed by spectral and temporal song features. No such song degradation is seen following NIf inactivation in adults. However, in both juveniles and adults NIf inactivation leads to a transient decrease in song stereotypy. Our findings reveal a contribution of NIf to song production in juveniles that agrees with its known role in adults in mediating thalamic drive to downstream vocal motor areas during sleep.
Cone, Barbara; Whitaker, Richard
Cortical auditory evoked potentials (CAEPs) to tones and speech sounds were obtained in infants to: (1) further knowledge of auditory development above the level of the brainstem during the first year of life; (2) establish CAEP input-output functions for tonal and speech stimuli as a function of stimulus level and (3) elaborate the data-base that establishes CAEP in infants tested while awake using clinically relevant stimuli, thus providing methodology that would have translation to pediatric audiological assessment. Hypotheses concerning CAEP development were that the latency and amplitude input-output functions would reflect immaturity in encoding stimulus level. In a second experiment, infants were tested with the same stimuli used to evoke the CAEPs. Thresholds for these stimuli were determined using observer-based psychophysical techniques. The hypothesis was that the behavioral thresholds would be correlated with CAEP input-output functions because of shared cortical response areas known to be active in sound detection. 36 infants, between the ages of 4 and 12 months (mean=8 months, s.d.=1.8 months) and 9 young adults (mean age 21 years) with normal hearing were tested. First, CAEPs amplitude and latency input-output functions were obtained for 4 tone bursts and 7 speech tokens. The tone bursts stimuli were 50 ms tokens of pure tones at 0.5, 1.0, 2.0 and 4.0 kHz. The speech sound tokens, /a/, /i/, /o/, /u/, /m/, /s/, and /∫/, were created from natural speech samples and were also 50 ms in duration. CAEPs were obtained for tone burst and speech token stimuli at 10 dB level decrements in descending order from 70 dB SPL. All CAEP tests were completed while the infants were awake and engaged in quiet play. For the second experiment, observer-based psychophysical methods were used to establish perceptual threshold for the same speech sound and tone tokens. Infant CAEP component latencies were prolonged by 100-150 ms in comparison to adults. CAEP latency
Sonntag, Mandy; Blosa, Maren; Schmidt, Sophie; Rübsamen, Rudolf; Morawski, Markus
Perineuronal nets (PNs) are a unique and complex meshwork of specific extracellular matrix molecules that ensheath a subset of neurons in many regions of the central nervous system (CNS). PNs appear late in development and are supposed to restrict synaptic plasticity and to stabilize functional neuronal connections. PNs were further hypothesized to create a charged milieu around the neurons and thus, might directly modulate synaptic activity. Although PNs were first described more than 120 years ago, their exact functions still remain elusive. The purpose of the present review is to propose the nuclei of the auditory system, which are highly enriched in PN-wearing neurons, as particularly suitable structures to study the functional significance of PNs. We provide a detailed description of the distribution of PNs from the cochlear nucleus to the auditory cortex considering distinct markers for detection of PNs. We further point to the suitability of specific auditory neurons to serve as promising model systems to study in detail the contribution of PNs to synaptic physiology and also more generally to the functionality of the brain.
Shulman, A; Strashun, A
The cerebellum and the descending auditory system (DAS) are considered clinically significant in shaping the clinical course of tinnitus of the severe disabling type. It is hypothesized that the cerebellar perfusion asymmetries demonstrated on brain SPECT since 1993 clinically reflect the influence of an aberrant auditory stimulus, i.e. tinnitus, on the activity and function of the descending auditory system, highlighted by the cerebellum and the acousticomotor systems. Brain SPECT perfusion asymmetries in the cerebellum have been demonstrated in 60-70% of tinnitus patients of the central type. Electrophysiologic support for this finding includes interference in ocular fixation suppression of the vestibulo-ocular reflex (VOR) during rotation and position testing. Abnormalities in cerebellar function are considered to reflect the psychomotor component of tinnitus. The hypothesis is supported by one patient with predominantly central-type tinnitus of the severe disabling type, who showed cerebellar perfusion asymmetries and associated electrophysiologic evidence of interference in the VOR during rotation testing.
Wang, Han Chin; Bergles, Dwight E
Spontaneous electrical activity is a common feature of sensory systems during early development. This sensory-independent neuronal activity has been implicated in promoting their survival and maturation, as well as growth and refinement of their projections to yield circuits that can rapidly extract information about the external world. Periodic bursts of action potentials occur in auditory neurons of mammals before hearing onset. This activity is induced by inner hair cells (IHCs) within the developing cochlea, which establish functional connections with spiral ganglion neurons (SGNs) several weeks before they are capable of detecting external sounds. During this pre-hearing period, IHCs fire periodic bursts of Ca(2+) action potentials that excite SGNs, triggering brief but intense periods of activity that pass through auditory centers of the brain. Although spontaneous activity requires input from IHCs, there is ongoing debate about whether IHCs are intrinsically active and their firing periodically interrupted by external inhibitory input (IHC-inhibition model), or are intrinsically silent and their firing periodically promoted by an external excitatory stimulus (IHC-excitation model). There is accumulating evidence that inner supporting cells in Kölliker's organ spontaneously release ATP during this time, which can induce bursts of Ca(2+) spikes in IHCs that recapitulate many features of auditory neuron activity observed in vivo. Nevertheless, the role of supporting cells in this process remains to be established in vivo. A greater understanding of the molecular mechanisms responsible for generating IHC activity in the developing cochlea will help reveal how these events contribute to the maturation of nascent auditory circuits.
Twomey, Tae; Waters, Dafydd; Price, Cathy J; Evans, Samuel; MacSweeney, Mairéad
To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was
Harris, Kelly C; Vaden, Kenneth I; Dubno, Judy R
The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response only reflect a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contribution of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude and how these variables may change across age groups. This information is important for appropriate interpretation of event-related potential (ERP) results and in understanding of age-related neural pathologies. EEG was measured from 25 younger and 25 older normal hearing adults. Age-related and individual differences in P2 response amplitudes, and variability in brain noise, phase locking value (PLV), and spectral power (4-8 Hz) were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. The results of regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal hearing adults. Based on our results, we discuss potential models in which differences in neural synchrony and brain noise can account for
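The inter-trial phase-locking value (PLV) used above as an index of neural synchrony can be sketched in a few lines. This is a minimal illustration with synthetic data, assuming band-passed single-trial epochs and using the Hilbert transform for instantaneous phase; the function name and simulated trials are illustrative, not from the study:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials):
    """Inter-trial phase-locking value per time point.
    trials: array (n_trials, n_samples) of band-passed single-trial EEG.
    Returns values in [0, 1]: 1 = identical phase on every trial."""
    phases = np.angle(hilbert(trials, axis=1))          # instantaneous phase, per trial
    return np.abs(np.mean(np.exp(1j * phases), axis=0)) # length of the mean phase vector

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
# synthetic trials: a 6 Hz oscillation with small phase jitter plus additive noise
trials = np.array([np.sin(2 * np.pi * 6 * t + rng.normal(0, 0.2))
                   + 0.5 * rng.standard_normal(t.size) for _ in range(50)])
plv = phase_locking_value(trials)
```

With phase-consistent trials the PLV stays close to 1; for pure noise it falls toward the chance floor of roughly 1/sqrt(n_trials).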
Kleber, Boris; Veit, Ralf; Moll, Christina Valérie; Gaser, Christian; Birbaumer, Niels; Lotze, Martin
In contrast to instrumental musicians, professional singers do not train on a specific instrument but perfect a motor system that has already been extensively trained during speech motor development. Previous functional imaging studies suggest that experience with singing is associated with enhanced somatosensory-based vocal motor control. However, experience-dependent structural plasticity in vocal musicians has rarely been studied. We investigated voxel-based morphometry (VBM) in 27 professional classical singers and compared gray matter volume in regions of the "singing-network" to an age-matched group of 28 healthy volunteers with no special singing experience. We found right hemispheric volume increases in professional singers in ventral primary somatosensory cortex (larynx S1) and adjacent rostral supramarginal gyrus (BA40), as well as in secondary somatosensory (S2) and primary auditory cortices (A1). Moreover, we found that earlier commencement with vocal training correlated with increased gray-matter volume in S1. However, in contrast to studies with instrumental musicians, this correlation only emerged in singers who began their formal training after the age of 14 years, when speech motor development has reached its first plateau. Structural data thus confirm and extend previous functional reports suggesting a pivotal role of somatosensation in vocal motor control with increased experience in singing. Results furthermore indicate a sensitive period for developing additional vocal skills after speech motor coordination has matured.
Rauschecker, Josef P.
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
Bellier, Ludovic; Bouchet, Patrick; Jeanvoine, Arnaud; Valentin, Olivier; Thai-Van, Hung; Caclin, Anne
Topographies of speech auditory brainstem response (speech ABR), a fine electrophysiological marker of speech encoding, have never been described. Yet, they could provide useful information to assess speech ABR generators and better characterize populations of interest (e.g., musicians, dyslexics). We present here a novel methodology of topographic speech ABR recording, using a 32-channel low sampling rate (5 kHz) EEG system. The quality of speech ABRs obtained with this conventional multichannel EEG system was compared to that of signals simultaneously recorded with a high sampling rate (13.3 kHz) EEG system. Correlations between speech ABRs recorded with the two systems revealed highly similar signals, without any significant difference between their signal-to-noise ratios (SNRs). Moreover, an advanced denoising method for multichannel data (denoising source separation) significantly improved SNR and allowed topography of speech ABR to be recovered. Copyright © 2014 Society for Psychophysiological Research.
Fan, Wenliang; Zhang, Wenjuan; Li, Jing; Zhao, Xueyan; Mella, Grace; Lei, Ping; Liu, Yuan; Wang, Haha; Cheng, Huamao; Shi, Hong; Xu, Haibo
To investigate the cerebral gray matter volume alterations in unilateral sudden sensorineural hearing loss patients within the acute period by the voxel-based morphometry method, and to determine if hearing impairment is associated with regional gray matter alterations in unilateral sudden sensorineural hearing loss patients. Prospective case study. Tertiary class A teaching hospital. Thirty-nine patients with left-side unilateral sudden sensorineural hearing loss and 47 patients with right-side unilateral sudden sensorineural hearing loss. Diagnostic. To compare the regional gray matter of unilateral sudden sensorineural hearing loss patients and healthy control participants. Compared with control groups, patients with left-side unilateral sudden sensorineural hearing loss had significant gray matter reductions in the right middle temporal gyrus and right superior temporal gyrus, whereas patients with right-side unilateral sudden sensorineural hearing loss showed gray matter decreases in the left superior temporal gyrus and left middle temporal gyrus. A significant negative correlation with the duration of the sudden sensorineural hearing loss (R = -0.427, p = 0.012 for left-side unilateral SSNHL and R = -0.412, p = 0.013 for right-side unilateral SSNHL) was also found in these brain areas. No region of increased gray matter was found in either group of unilateral sudden sensorineural hearing loss patients. This study confirms, using voxel-based morphometry, that detectable decreases in contralateral auditory cortical gray matter occur in unilateral SSNHL patients within the acute period. The gray matter volumes of these brain areas also correlate negatively with the duration of the disease, suggesting gradual structural impairment as the disease progresses.
Kong, Ying-Yee; Somarowthu, Ala; Ding, Nai
This study investigates the effect of spectral degradation on cortical speech encoding in complex auditory scenes. Young normal-hearing listeners were simultaneously presented with two speech streams and were instructed to attend to only one of them. The speech mixtures were subjected to noise-channel vocoding to preserve the temporal envelope and degrade the spectral information of speech. Each subject was tested with five spectral resolution conditions (unprocessed speech, 64-, 32-, 16-, and 8-channel vocoder conditions) and two target-to-masker ratio (TMR) conditions (3 and 0 dB). Ongoing electroencephalographic (EEG) responses and speech comprehension were measured in each spectral and TMR condition for each subject. Neural tracking of each speech stream was characterized by cross-correlating the EEG responses with the envelope of each of the simultaneous speech streams at different time lags. Results showed that spectral degradation and TMR both significantly influenced how top-down attention modulated the EEG responses to the attended and unattended speech. That is, the EEG responses to the attended and unattended speech streams differed more for the higher (unprocessed, 64 ch, and 32 ch) than the lower (16 and 8 ch) spectral resolution conditions, as well as for the higher (3 dB) than the lower TMR (0 dB) condition. The magnitude of differential neural modulation responses to the attended and unattended speech streams significantly correlated with speech comprehension scores. These results suggest that severe spectral degradation and low TMR hinder speech stream segregation, making it difficult to employ top-down attention to differentially process different speech streams.
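The neural-tracking measure described above, cross-correlating an EEG channel with a speech envelope at a range of time lags, can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline; the function name, the smoothed-noise "envelope", and the 12-sample lag are all assumptions:

```python
import numpy as np

def neural_tracking(eeg, envelope, max_lag):
    """Normalized cross-correlation between a stimulus envelope and one EEG
    channel, at lags 0..max_lag samples (EEG lagging the stimulus)."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (envelope - envelope.mean()) / envelope.std()
    n = len(eeg)
    return np.array([np.dot(env[:n - lag], eeg[lag:]) / (n - lag)
                     for lag in range(max_lag + 1)])

rng = np.random.default_rng(1)
# toy "attended" envelope: smoothed noise, tracked by the EEG at a 12-sample lag
env = np.convolve(rng.standard_normal(2049), np.ones(50) / 50, mode="valid")
eeg = np.concatenate([np.zeros(12), env[:-12]]) + 0.05 * rng.standard_normal(env.size)
lags = neural_tracking(eeg, env, max_lag=30)
peak_lag = int(np.argmax(lags))  # peak expected near the true 12-sample lag
```

In an attended-vs-unattended comparison, the same function would be applied with the envelopes of both simultaneous speech streams, and the difference between the resulting correlation functions taken as the attentional modulation index.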
Ghazaleh, Naghmeh; Zwaag, Wietske van der; Clarke, Stephanie; Ville, Dimitri Van De; Maire, Raphael; Saenz, Melissa
Animal models of hearing loss and tinnitus observe pathological neural activity in the tonotopic frequency maps of the primary auditory cortex. Here, we applied ultra high-field fMRI at 7 T to test whether human patients with unilateral hearing loss and tinnitus also show altered functional activity in the primary auditory cortex. The high spatial resolution afforded by 7 T imaging allowed tonotopic mapping of primary auditory cortex on an individual subject basis. Eleven patients with unilateral hearing loss and tinnitus were compared to normal-hearing controls. Patients showed an over-representation and hyperactivity in a region of the cortical map corresponding to low-frequency sounds, irrespective of the hearing loss and tinnitus range, which in most cases affected higher frequencies. This finding of hyperactivity in low frequency map regions, irrespective of hearing loss range, is consistent with some previous studies in animal models and corroborates a previous study of human tinnitus. Thus, these findings contribute to accumulating evidence that gross cortical tonotopic map reorganization is not a causal factor of tinnitus.
Irvine, Dexter R. F.
This article discusses findings concerning the plasticity of auditory cortical processing mechanisms in adults, including the effects of restricted cochlear damage or behavioral training with acoustic stimuli on the frequency selectivity of auditory cortical neurons and evidence for analogous injury- and use-related plasticity in the adult human…
Butler, Blake E; Chabot, Nicole; Lomber, Stephen G
Following sensory loss, compensatory crossmodal reorganization occurs such that the remaining modalities are functionally enhanced. For example, behavioral evidence suggests that peripheral visual localization is better in deaf than in normal hearing animals, and that this enhancement is mediated by recruitment of the posterior auditory field (PAF), an area that is typically involved in localization of sounds in normal hearing animals. To characterize the anatomical changes that underlie this phenomenon, we identified the thalamic and cortical projections to the PAF in hearing cats and those with early- and late-onset deafness. The retrograde tracer biotinylated dextran amine was deposited in the PAF unilaterally, to label cortical and thalamic afferents. Following early deafness, there was a significant decrease in callosal projections from the contralateral PAF. Late-deaf animals showed small-scale changes in projections from one visual cortical area, the posterior ectosylvian field (EPp), and the multisensory zone (MZ). With the exception of these minor differences, connectivity to the PAF was largely similar between groups, with the principal projections arising from the primary auditory cortex (A1) and the ventral division of the medial geniculate body (MGBv). This absence of large-scale connectional change suggests that the functional reorganization that follows sensory loss results from changes in synaptic strength and/or unmasking of subthreshold intermodal connections. J. Comp. Neurol. 524:3042-3063, 2016. © 2016 Wiley Periodicals, Inc.
van Kemenade, Bianca M.; Arikan, B. Ezgi; Fiehler, Katja; Leube, Dirk T.; Harris, Laurence R.; Kircher, Tilo
Predictive mechanisms are essential to successfully interact with the environment and to compensate for delays in the transmission of neural signals. However, whether and how we predict multisensory action outcomes remains largely unknown. Here we investigated the existence of multisensory predictive mechanisms in a context where actions have outcomes in different modalities. During fMRI data acquisition auditory, visual and auditory-visual stimuli were presented in active and passive conditions. In the active condition, a self-initiated button press elicited the stimuli with variable short delays (0-417 ms) between action and outcome, and participants had to detect the presence of a delay for auditory or visual outcome (task modality). In the passive condition, stimuli appeared automatically, and participants had to detect the number of stimulus modalities (unimodal/bimodal). For action consequences compared to identical but unpredictable control stimuli we observed suppression of the blood oxygen level dependent (BOLD) response in a broad network including bilateral auditory and visual cortices. This effect was independent of task modality or stimulus modality and strongest for trials where no delay was detected (undetected
Berman, Jeffrey I; Edgar, James C; Blaskey, Lisa; Kuschner, Emily S; Levy, Susan E; Ku, Matthew; Dell, John; Roberts, Timothy P L
Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Present findings suggest atypical development of primary auditory as well as
Meredith, M. Alex; Clemo, H. Ruth; Corley, Sarah B.; Chabot, Nicole; Lomber, Stephen G.
Early hearing loss leads to crossmodal plasticity in regions of the cerebrum that are dominated by acoustical processing in hearing subjects. Until recently, little has been known of the connectional basis of this phenomenon. One region whose crossmodal properties are well-established is the auditory field of the anterior ectosylvian sulcus (FAES) in the cat, where neurons are normally responsive to acoustic stimulation and its deactivation leads to the behavioral loss of accurate orienting toward auditory stimuli. However, in early-deaf cats, visual responsiveness predominates in the FAES and its deactivation blocks accurate orienting behavior toward visual stimuli. For such crossmodal reorganization to occur, it has been presumed that novel inputs or increased projections from non-auditory cortical areas must be generated, or that existing non-auditory connections were ‘unmasked.’ These possibilities were tested using tracer injections into the FAES of adult cats deafened early in life (and hearing controls), followed by light microscopy to localize retrogradely labeled neurons. Surprisingly, the distribution of cortical and thalamic afferents to the FAES was very similar among early-deaf and hearing animals. No new visual projection sources were identified and visual cortical connections to the FAES were comparable in projection proportions. These results support an alternate theory for the connectional basis for cross-modal plasticity that involves enhanced local branching of existing projection terminals that originate in non-auditory as well as auditory cortices. PMID:26724756
Itoh, Kosuke; Okumiya-Kanke, Yoko; Nakayama, Yoh; Kwee, Ingrid L; Nakada, Tsutomu
The effects of musical training on the early auditory cortical response to pitch transitions in music were investigated by use of the change-N1 component of auditory event-related potentials. Musicians and non-musicians were presented with music stimuli comprising a melody and a harmony under various listening conditions. First, when the subjects played a video game and were instructed to ignore the auditory stimuli, the onset of stimuli elicited a typical, fronto-central onset-N1, whereas melodic and harmonic pitch transitions within the stimuli elicited so-called change-N1s that were more posterior in scalp distribution. The pitch transition change-N1s, but not onset-N1, were enhanced in musicians. Second, when the listeners attended to the same stimuli as above to detect infrequently occurring target stimuli, the change-N1 elicited by pitch changes (in non-target stimuli) was augmented, in non-musicians only when the target was easily detectable, and in both musicians and non-musicians when it was difficult to detect. Thus, the early, obligatory cortical response to pitch transitions during passive listening was chronically enhanced by training in musicians, and, reflecting this training-induced enhancement, the task-related modulation of this response was also different between musicians and non-musicians. These results are the first to demonstrate the long-term effects of training, short-term effects of task and the effects of their interaction on the early (~100-ms) cortical processing of pitch transitions in music. The scalp distributions of these enhancement effects were generally right dominant at temporal electrode sites, suggesting contributions from the radially oriented subcomponent of change-N1, namely, the Tb (N1c) wave of the T-complex. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Fujimoto, So; Komura, Yutaka
Brodmann areas 41 and 42 are located in the superior temporal gyrus and regarded as auditory cortices. The fundamental function in audition is frequency analysis; however, findings on tonotopic maps of the human auditory cortex were not unified until recently, when they were compared with findings on the inputs and outputs of the monkey auditory cortex. The auditory cortex shows plasticity after conditioned learning and cochlear implant surgery. It is also involved in speech perception, music appreciation, and auditory hallucination in schizophrenia through interactions with other brain areas, such as the thalamus, frontal cortex, and limbic system.
Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve
Conclusions: Our findings suggest that the gaming EEG system may prove a valid alternative to laboratory ERP systems for recording reliable late auditory ERPs (P1, N1, P2, N2, and the P3) over the frontal cortices. In the future, the gaming EEG system may also prove useful for measuring less reliable ERPs, such as the MMN, if the reliability of such ERPs can be boosted to the same level as late auditory ERPs.
Fujioka, Takako; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo; Trainor, Laurel J.
Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields…
Ten Oever, Sanne; Schroeder, Charles E; Poeppel, David; van Atteveldt, Nienke; Mehta, Ashesh D; Mégevand, Pierre; Groppe, David M; Zion-Golumbic, Elana
Many environmental stimuli contain temporal regularities, a feature which can help predict forthcoming input. Phase-locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (Ten Oever et al., 2014). It is not known whether this "inaudible" rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography (MEG) and electrocorticography (ECoG) in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase-locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this pre-threshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in inter-trial coherence (ITC), only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally-predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness. SIGNIFICANCE STATEMENT The environment is full of rhythmically structured signals that the nervous system can exploit for information processing. Thus it is important to understand how the brain processes such temporally structured, regular features of external stimuli. Here we report the alignment of slowly fluctuating oscillatory brain
Reale, R A; Brugge, J F
1. The interaural-phase-difference (IPD) sensitivity of single neurons in the primary auditory (AI) cortex of the anesthetized cat was studied at stimulus frequencies ranging from 120 to 2,500 Hz. Best frequencies of the 43 AI cells sensitive to IPD ranged from 190 to 2,400 Hz. 2. A static IPD was produced when a pair of low-frequency tone bursts, differing from one another only in starting phase, were presented dichotically. The resulting IPD-sensitivity curves, which plot the number of discharges evoked by the binaural signal as a function of IPD, were deeply modulated circular functions. IPD functions were analyzed for their mean vector length (r) and mean interaural phase (phi). Phase sensitivity was relatively independent of best frequency (BF) but highly dependent on stimulus frequency. Regardless of BF or stimulus frequency within the excitatory response area the majority of cells fired maximally when the ipsilateral tone lagged the contralateral signal and fired least when this interaural-phase relationship was reversed. 3. Sensitivity to continuously changing IPD was studied by delivering to the two ears 3-s tones that differed slightly in frequency, resulting in a binaural beat. Approximately 26% of the cells that showed a sensitivity to static changes in IPD also showed a sensitivity to dynamically changing IPD created by this binaural tonal combination. The discharges were highly periodic and tightly synchronized to a particular phase of the binaural beat cycle. High synchrony can be attributed to the fact that cortical neurons typically respond to an excitatory stimulus with but a single spike that is often precisely timed to stimulus onset. A period histogram, binned on the binaural beat frequency (fb), produced an equivalent IPD-sensitivity function for dynamically changing interaural phase. For neurons sensitive to both static and continuously changing interaural phase there was good correspondence between their static (phi s) and dynamic (phi d
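The mean vector length (r) and mean interaural phase (phi) used above to summarize the IPD-sensitivity curves and period histograms are circular statistics over response phases. A minimal sketch with synthetic, phase-locked spike times follows; the function name and the simulated 1 Hz beat cycle are illustrative, not from the study:

```python
import numpy as np

def vector_strength(spike_times, period):
    """Mean vector length r and mean phase phi (radians) of spike times
    relative to a cycle of the given period (circular statistics)."""
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    vec = np.mean(np.exp(1j * phases))
    return np.abs(vec), np.angle(vec)  # r in [0, 1]; phi in (-pi, pi]

# synthetic spikes: one spike per cycle of a 1 Hz beat, locked near 0.25 s
# (phase pi/2) with small temporal jitter
rng = np.random.default_rng(2)
spikes = np.arange(30) + 0.25 + rng.normal(0, 0.02, 30)
r, phi = vector_strength(spikes, period=1.0)
```

Tightly synchronized discharges, as described for these cortical neurons, give r close to 1; uniformly distributed spike phases give r close to 0.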
Reuss, Stefan; Banica, Ovidiu; Elgurt, Mirra; Mitz, Stephanie; Disque-Kaiser, Ursula; Riemann, Randolf; Hill, Marco; Jaquish, Dawn V.; Koehrn, Fred J.; Burmester, Thorsten; Hankeln, Thomas; Woolf, Nigel K.
The energy-yielding pathways that provide the large amounts of metabolic energy required by inner ear sensorineural cells are poorly understood. Neuroglobin (Ngb) is a neuron-specific hemoprotein of the globin family, which is suggested to be involved in oxidative energy metabolism. Here we present quantitative real-time reverse transcription PCR, in situ hybridization, immunohistochemical and Western blot evidence that neuroglobin is highly expressed in the mouse and rat cochlea. For primary cochlea neurons, Ngb expression is limited to the subpopulation of type I spiral ganglion cells, those which innervate inner hair cells, while the subpopulation of type II spiral ganglion cells which innervate the outer hair cells do not express Ngb. We further investigated Ngb distribution in rat, mouse and human auditory brainstem centers, and found that the cochlear nuclei and superior olivary complex (SOC) also express considerable amounts of Ngb. Notably, the majority of olivocochlear neurons, those which provide efferent innervation of outer hair cells as identified by neuronal tract tracing, were Ngb-immunoreactive. We also observed that neuroglobin in the SOC frequently co-localized with neuronal nitric oxide synthase, the enzyme responsible for nitric oxide production. Our findings suggest that neuroglobin is well positioned to play an important physiologic role in the oxygen homeostasis of the peripheral and central auditory nervous system, and provides the first evidence that Ngb signal differentiates the central projections of the inner and outer hair cells. PMID:25636685
Barlow, Nathan; Purdy, Suzanne C; Sharma, Mridula; Giles, Ellen; Narne, Vijay
This study investigated whether a short intensive psychophysical auditory training program is associated with speech perception benefits and changes in cortical auditory evoked potentials (CAEPs) in adult cochlear implant (CI) users. Ten adult implant recipients trained approximately 7 hours on psychophysical tasks (Gap-in-Noise Detection, Frequency Discrimination, Spectral Rippled Noise [SRN], Iterated Rippled Noise, Temporal Modulation). Speech performance was assessed before and after training using Lexical Neighborhood Test (LNT) words in quiet and in eight-speaker babble. CAEPs evoked by a natural speech stimulus /baba/ with varying syllable stress were assessed pre- and post-training, in quiet and in noise. SRN psychophysical thresholds showed a significant improvement (78% on average) over the training period, but performance on other psychophysical tasks did not change. LNT scores in noise improved significantly post-training by 11% on average compared with three pretraining baseline measures. N1P2 amplitude changed post-training for /baba/ in quiet (p = 0.005, visit 3 pretraining versus visit 4 post-training). CAEP changes did not correlate with behavioral measures. CI recipients' clinical records indicated a plateau in speech perception performance prior to participation in the study. A short period of intensive psychophysical training produced small but significant gains in speech perception in noise and spectral discrimination ability. There remain questions about the most appropriate type of training and the duration or dosage of training that provides the most robust outcomes for adults with CIs.
Barlow, Nathan; Purdy, Suzanne C.; Sharma, Mridula; Giles, Ellen; Narne, Vijay
This study investigated whether a short intensive psychophysical auditory training program is associated with speech perception benefits and changes in cortical auditory evoked potentials (CAEPs) in adult cochlear implant (CI) users. Ten adult implant recipients trained approximately 7 hours on psychophysical tasks (Gap-in-Noise Detection, Frequency Discrimination, Spectral Rippled Noise [SRN], Iterated Rippled Noise, Temporal Modulation). Speech performance was assessed before and after training using Lexical Neighborhood Test (LNT) words in quiet and in eight-speaker babble. CAEPs evoked by a natural speech stimulus /baba/ with varying syllable stress were assessed pre- and post-training, in quiet and in noise. SRN psychophysical thresholds showed a significant improvement (78% on average) over the training period, but performance on other psychophysical tasks did not change. LNT scores in noise improved significantly post-training by 11% on average compared with three pretraining baseline measures. N1P2 amplitude changed post-training for /baba/ in quiet (p = 0.005, visit 3 pretraining versus visit 4 post-training). CAEP changes did not correlate with behavioral measures. CI recipients' clinical records indicated a plateau in speech perception performance prior to participation in the study. A short period of intensive psychophysical training produced small but significant gains in speech perception in noise and spectral discrimination ability. There remain questions about the most appropriate type of training and the duration or dosage of training that provides the most robust outcomes for adults with CIs. PMID:27587925
Shim, Miseon; Kim, Do-Won; Lee, Seung-Hwan; Im, Chang-Hwan
P300 deficits in patients with schizophrenia have previously been investigated using EEGs recorded during auditory oddball tasks. However, small-world cortical functional networks during auditory oddball tasks and their relationships with symptom severity scores in schizophrenia have not yet been investigated. In this study, the small-world characteristics of source-level functional connectivity networks of EEG responses elicited by an auditory oddball paradigm were evaluated using two representative graph-theoretical measures, clustering coefficient and path length. EEG signals from 34 patients with schizophrenia and 34 healthy controls were recorded while each subject was asked to attend to oddball tones. The results showed reduced clustering coefficients and increased path lengths in patients with schizophrenia, suggesting that the small-world functional network is disrupted in patients with schizophrenia. In addition, the negative and cognitive symptom components of positive and negative symptom scales were negatively correlated with the clustering coefficient and positively correlated with path length, demonstrating that both indices are indicators of symptom severity in patients with schizophrenia. Our study results suggest that disrupted small-world characteristics are potential biomarkers for patients with schizophrenia.
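The two graph-theoretical measures named in this abstract can be illustrated concretely. The following is a minimal pure-Python sketch (not the study's analysis code; the toy adjacency matrix is invented for illustration) of the average clustering coefficient and the characteristic path length of an unweighted, undirected network. In the small-world framework, lower clustering together with longer paths, as reported for the patient group, indicates a loss of small-world organization.

```python
from collections import deque

def clustering_coefficient(adj):
    # Mean local clustering coefficient of a symmetric 0/1 adjacency matrix:
    # for each node, the fraction of possible links among its neighbours.
    n = len(adj)
    total = 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbours contribute 0
        links = sum(adj[u][v] for u in nbrs for v in nbrs if u < v)
        total += 2.0 * links / (k * (k - 1))
    return total / n

def characteristic_path_length(adj):
    # Mean shortest-path length over all connected ordered node pairs,
    # computed by breadth-first search from every node.
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

# toy 4-node network: a ring with one chord (1-3)
adj = [[0, 1, 0, 1],
       [1, 0, 1, 1],
       [0, 1, 0, 1],
       [1, 1, 1, 0]]

print(clustering_coefficient(adj))      # 5/6, about 0.833
print(characteristic_path_length(adj))  # 7/6, about 1.167
```

In practice such metrics are computed with a library such as NetworkX over thresholded connectivity matrices; the sketch only makes the two definitions explicit.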
Kuriki, Shinya; Ohta, Keisuke; Koyama, Sachiko
Long-latency auditory-evoked magnetic field and potential show strong attenuation of N1m/N1 responses when an identical stimulus is presented repeatedly due to adaptation of auditory cortical neurons. This adaptation is weak in subsequently occurring P2m/P2 responses, being weaker for piano chords than single piano notes. The adaptation of P2m is more suppressed in musicians having long-term musical training than in nonmusicians, whereas the amplitude of P2 is enhanced preferentially in musicians as the spectral complexity of musical tones increases. To address the key issues of whether such high responsiveness of P2m/P2 responses to complex sounds is intrinsic and common to nonmusical sounds, we conducted a magnetoencephalographic study on participants who had no experience of musical training, using consecutive trains of piano and vowel sounds. The dipole moment of the P2m sources located in the auditory cortex indicated significantly suppressed adaptation in the right hemisphere both to piano and vowel sounds. Thus, the persistent responsiveness of the P2m activity may be inherent, not induced by intensive training, and common to spectrally complex sounds. The right hemisphere dominance of the responsiveness to musical and speech sounds suggests analysis of acoustic features of object sounds to be a significant function of P2m activity.
Folland, Nicole A; Butler, Blake E; Payne, Jennifer E; Trainor, Laurel J
Sound waves emitted by two or more simultaneous sources reach the ear as one complex waveform. Auditory scene analysis involves parsing a complex waveform into separate perceptual representations of the sound sources [Bregman, A. S. Auditory scene analysis: The perceptual organization of sounds. London: MIT Press, 1990]. Harmonicity provides an important cue for auditory scene analysis. Normally, harmonics at integer multiples of a fundamental frequency are perceived as one sound with a pitch corresponding to the fundamental frequency. However, when one harmonic in such a complex, pitch-evoking sound is sufficiently mistuned, that harmonic emerges from the complex tone and is perceived as a separate auditory object. Previous work has shown that the percept of two objects is indexed in both children and adults by the object-related negativity component of the ERP derived from EEG recordings [Alain, C., Arnott, S. T., & Picton, T. W. Bottom-up and top-down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27, 1072-1089, 2001]. Here we examine the emergence of object-related responses to an 8% harmonic mistuning in infants between 2 and 12 months of age. Two-month-old infants showed no significant object-related response. However, in 4- to 12-month-old infants, a significant frontally positive component was present, and by 8-12 months, a significant frontocentral object-related negativity was present, similar to that seen in older children and adults. This is in accordance with previous research demonstrating that infants younger than 4 months of age do not integrate harmonic information to perceive pitch when the fundamental is missing [He, C., Hotson, L., & Trainor, L. J. Maturation of cortical mismatch responses to occasional pitch change in early infancy: Effects of presentation rate and magnitude of change. Neuropsychologia, 47, 218-229, 2009].
Putter-Katz, Hanna; Kishon-Rabin, Liat; Sachartov, Emma; Shabtai, Esther L; Sadeh, Michelle; Weiz, Raphael; Gadoth, Natan; Pratt, Hillel
Children with dyslexia have difficulties with phonological processing. It is assumed that deficits in auditory temporal processing (i.e., the processing of rapid acoustic changes that occur in speech) underlie the phonological difficulties of dyslexic readers. In this study we assessed behavioral and electrophysiological evoked brain responses of dyslexic and skilled-reading children while they performed a set of hierarchically structured auditory tasks. Stimuli consisted of natural, unmodified speech that was controlled for the rate of change of the main acoustic cues: vowels (slowly changing speech cues: /i/ versus /u/) and consonant-vowel (CV) syllables (rapidly changing speech cues: /da/ versus /ga/). Auditory processing differed significantly between groups: reaction time of dyslexic readers was prolonged in identifying speech stimuli and increased with increasing phonological demand. Latencies of auditory evoked responses (auditory event-related potentials [AERPs]) recorded during syllable identification were prolonged in the dyslexic group relative to skilled readers. Moreover, N1 amplitudes during vowel processing were larger, and P3 amplitudes during CV processing were smaller, in the dyslexic children. From the results of this study it is evident that the latency and amplitude of AERPs are sensitive measures of the complexity of phonological processing in skilled and dyslexic readers. These results may be signs of deficient auditory processing of natural speech under normal listening conditions as a contributing factor to reading difficulties in dyslexia. Detecting a dysfunction in the central auditory processing pathway might lead to early identification of children who may benefit from phonetic-acoustic training methods.
This study consists of two experiments. Pitch, volume, and tempo in auditory-haptic geographic information systems were compared in terms of effectiveness for multimodal interface; volume was determined to be better. Auditory display with volume and haptic display with vibration were compared and the results showed that, in more complex geographic…
Jeong, Wooseob; Gluck, Myke
Investigated the feasibility of adding haptic and auditory displays to traditional visual geographic information systems (GISs). Explored differences in user performance, including task completion time and accuracy, and user satisfaction with a multimodal GIS which was implemented with a haptic display, auditory display, and combined display.…
Behler, Oliver; Uppenkamp, Stefan
Loudness is the perceptual correlate of the physical intensity of a sound. However, loudness judgments depend on a variety of other variables and can vary considerably between individual listeners. While functional magnetic resonance imaging (fMRI) has been extensively used to characterize the neural representation of physical sound intensity in the human auditory system, only a few studies have also investigated brain activity in relation to individual loudness. The physiological correlate of loudness perception is not yet fully understood. The present study systematically explored the interrelation of sound pressure level, ear of entry, individual loudness judgments, and fMRI activation along different stages of the central auditory system and across hemispheres for a group of normal-hearing listeners. 4-kHz bandpass-filtered noise stimuli were presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. The participants completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound level and loudness estimates was analyzed by means of functional activation maps and linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. Our findings are overall in line with the notion that fMRI activation in several regions within auditory cortex, as well as in certain stages of the ascending auditory pathway, might be a more direct linear reflection of perceived loudness than of sound pressure level. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of level and…
Woolley, Sarah M. N.; Gill, Patrick R.; Fremouw, Thane; Theunissen, Frédéric E.
Auditory perception depends on the coding and organization of the information-bearing acoustic features of sounds by auditory neurons. We report here that auditory neurons can be classified into functional groups each of which plays a specific role in extracting distinct complex sound features. We recorded the electrophysiological responses of single auditory neurons in the songbird midbrain and forebrain to conspecific song, measured their tuning by calculating spectrotemporal receptive fields (STRFs) and classified them using multiple cluster analysis methods. Based on STRF shape, cells clustered into functional groups that divided the space of acoustical features into regions that represent cues for the fundamental acoustic percepts of pitch, timbre and rhythm. Four major groups were found in the midbrain and five major groups were found in the forebrain. Comparing STRFs in midbrain and forebrain neurons suggested that both inheritance and emergence of tuning properties occur as information ascends the auditory processing stream. PMID:19261874
Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto
One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch 'border' between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636
Maby, Emmanuel; Jeannes, Regine Le Bouquin; Faucon, Gerard
This study attempted to determine whether there is a localized effect of GSM (Global System for Mobile communications) microwaves by studying the Auditory Evoked Potentials (AEP) recorded at the scalp of nine healthy subjects and six epileptic patients. We determined the influence of GSM RadioFrequency (RF) on parameters characterizing the AEP in the time and/or frequency domains. A parameter selection method using SVM (Support Vector Machines)-based criteria allowed us to estimate those most altered by the radiofrequencies. The topography of the parameter modifications was computed to determine the localization of the radiofrequency influence. A statistical test was conducted for selected scalp areas, in order to determine whether there were significant localized alterations due to the RF. The epileptic patients showed a lengthening of the scalp component N100 (100 ms latency) in the frontal area contralateral to the radiation, which may be due to an afferent tract alteration. For the healthy subjects, an amplitude increase of the P200 wave (200 ms latency) was identified in the frontal area. The present study suggests that radiofrequency fields emitted by mobile phones modify the AEP. Nevertheless, no direct link between these findings and RF-induced damage to brain function was established.
Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile
The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a CI program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation surgery with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with a cochlear implant showed a gradual decrease in latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163
Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B
The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
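The "regularized multivariate classifier" mentioned above can be illustrated with a shrinkage (ridge) linear discriminant, one common choice for few-trials, high-dimensional neural data. The study's actual classifier, features, and dimensionality are not specified here, so everything below, including the toy data, the regularization strength lam, and the function names, is hypothetical:

```python
# Illustrative shrinkage-regularized two-class linear discriminant on 2-D toy
# data. Hypothetical sketch: not the study's classifier, features, or data.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit_rlda(class_a, class_b, lam=1.0):
    # w = (S + lam*I)^-1 (mu_a - mu_b): the ridge term lam*I keeps the
    # pooled covariance S invertible when trials are few relative to
    # dimensions -- the usual motivation for regularizing such classifiers.
    ma, mb = mean(class_a), mean(class_b)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for r in rows:
            dx, dy = r[0] - m[0], r[1] - m[1]
            S[0][0] += dx * dx
            S[0][1] += dx * dy
            S[1][0] += dy * dx
            S[1][1] += dy * dy
    dof = len(class_a) + len(class_b) - 2
    S = [[S[i][j] / dof + (lam if i == j else 0.0) for j in range(2)]
         for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]  # 2x2 closed-form inverse
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    # bias places the decision boundary at the midpoint between class means
    b = -(w[0] * (ma[0] + mb[0]) / 2 + w[1] * (ma[1] + mb[1]) / 2)
    return w, b

def predict(w, b, x):
    return 'A' if w[0] * x[0] + w[1] * x[1] + b > 0 else 'B'

# toy "evoked responses": two summary features per trial, one list per stimulus
class_a = [(2.0, 1.0), (2.5, 1.2), (1.8, 0.9)]
class_b = [(-2.0, -1.0), (-2.2, -0.8), (-1.9, -1.1)]
w, b = fit_rlda(class_a, class_b)
```

With real evoked-potential data the feature vectors would have hundreds of dimensions, making the shrinkage term essential rather than cosmetic.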
Leavitt, Victoria M.; Molholm, Sophie; Ritter, Walter; Shpaner, Marina; Foxe, John J.
Introduction Auditory sensory processing dysfunction is a core component of schizophrenia, with deficits occurring at 50 ms post-stimulus firmly established in the literature. Given that the initial afference of primary auditory cortex occurs at least 35 ms earlier, however, an essential question remains: how early in sensory processing do such deficits arise, and do they occur during initial cortical afference or earlier, which would implicate subcortical auditory dysfunction? Objective To establish the onset of the earliest deficits in auditory processing, we examined the time window demarcating the transition from subcortical to cortical processing: 10 ms to 50 ms, during the so-called middle latency responses (MLRs). These remain to be adequately characterized in patients with schizophrenia. Methods We recorded auditory evoked potentials (AEPs) to simple tone-pips from 15 control subjects and 21 medicated patients with longer-term schizophrenia or schizoaffective disorder (illness duration 16 yr, standard deviation [SD] 9.4 yr), using high-density electrical scalp recordings. Between-group analyses assessed the integrity of the MLRs across groups. In addition, 2 source-localization models were conducted to address whether a distinction between subcortical and cortical generators of the MLRs can be made and whether evidence for differential dorsal and ventral pathway contributions to auditory processing deficits can be established. Results Robust auditory processing deficits were found for patients as early as 15 ms. Evidence for subcortical generators of the earliest MLR component (P20) was provided by source analysis. Topographical mapping and source localization also pointed to greater decrements in processing in the dorsal auditory pathway of patients, providing support for a theory of pervasive deficits that are organized along the lines of a dorsal–ventral distinction. Conclusions Auditory sensory dysfunction in schizophrenia begins extremely early in…
Fernández de Molina y Cañas, A
After reviewing the concept of the specific and nonspecific thalamo-cortical systems, the connectivity of the relay and intralaminar nuclei is analyzed, together with recent data on the chemical identity of thalamic neurons, the concept and distribution of "matrix" and "core" neurons, and their functional roles. The intrinsic electrical properties of thalamic neurons, their mode of discharge (which depends on the membrane potential level), and their functional significance in the context of the brain's global activity are discussed. Of special interest are studies on the effects of lesions of the relay and intralaminar nuclei and their repercussions for the interpretation of sensory perception. After an intralaminar nuclei lesion, the individual is not aware of the information conveyed through the specific channels. There follows a discussion of the importance of temporal and spatial mapping in the elaboration of perception and cognition. Owing to the intrinsic electrical properties and connectivity of thalamic neurons, two groups of corticothalamic loops are generated, which resonate at a frequency of 40 Hz. The specific thalamo-cortical loops provide the content of cognition, and the nonspecific loops provide the temporal binding required for the unity of the cognitive experience. Consciousness is thus a product of resonant thalamo-cortical activity, and the dialogue between thalamus and cortex is the process that generates subjectivity, the unique experience we all recognize as the existence of the "self".
Poeppel, David; Hickok, Gregory
Auditory processing is remarkably fast and sensitive to the precise temporal structure of acoustic signals over a range of scales, from submillisecond phenomena such as localization to the construction of elementary auditory attributes at tens of milliseconds to basic properties of speech and music at hundreds of milliseconds. In light of the rapid (and often transitory) nature of auditory phenomena, a technique with high temporal resolution is appropriate for investigating the neurocomputational basis of auditory perception and cognition. Here we briefly outline the utility of magnetoencephalography (MEG) for the study of the neural basis of audition. The basics of MEG are outlined in brief, and some of the most-used neural responses are described. We discuss the classic transient evoked fields (e.g., M100), responses elicited by change in a stimulus (e.g., pitch-onset response), the auditory steady-state response, and neural oscillations (e.g., theta-phase tracking). Because of the high temporal resolution and the good spatial resolution of MEG, paired with the convenient location of human auditory cortex for MEG-based recording, electromagnetic recording of this type is well suited to investigate various aspects of audition, from crafted laboratory experiments on pitch perception or scene analysis to naturalistic speech and music tasks.
Petrus, Emily; Rodriguez, Gabriela; Patterson, Ryan; Connor, Blaine; Kanold, Patrick O; Lee, Hey-Kyoung
Loss of a sensory modality leads to widespread changes in synaptic function across sensory cortices, which are thought to be the basis for cross-modal adaptation. Previous studies suggest that experience-dependent cross-modal regulation of the spared sensory cortices may be mediated by changes in cortical circuits. Here, we report that loss of vision, in the form of dark exposure (DE) for 1 week, produces laminar-specific changes in excitatory and inhibitory circuits in the primary auditory cortex (A1) of adult mice to promote feedforward (FF) processing and also strengthens intracortical inputs to primary visual cortex (V1). Specifically, DE potentiated FF excitatory synapses from layer 4 (L4) to L2/3 in A1 and recurrent excitatory inputs in A1-L4 in parallel with a reduction in the strength of lateral intracortical excitatory inputs to A1-L2/3. This suggests a shift in processing in favor of FF information at the expense of intracortical processing. Vision loss also strengthened inhibitory synaptic function in L4 and L2/3 of A1, but via laminar specific mechanisms. In A1-L4, DE specifically potentiated the evoked synaptic transmission from parvalbumin-positive inhibitory interneurons to principal neurons without changes in spontaneous miniature IPSCs (mIPSCs). In contrast, DE specifically increased the frequency of mIPSCs in A1-L2/3. In V1, FF excitatory inputs were unaltered by DE, whereas lateral intracortical connections in L2/3 were strengthened, suggesting a shift toward intracortical processing. Our results suggest that loss of vision produces distinct circuit changes in the spared and deprived sensory cortices to shift between FF and intracortical processing to allow adaptation.
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360
Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.; Shore, Susan E.
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698
Begault, Durand R. (Inventor)
Methods and systems for distinguishing an auditory alert signal from a background of one or more non-alert signals. In a first embodiment, a prefix signal, associated with an existing alert signal, is provided that has a signal component in each of three or more selected frequency ranges, with each signal component at a selected level at least 3-10 dB above an estimated background (non-alert) level in that frequency range. The alert signal may be chirped within one or more frequency bands. In another embodiment, an alert signal moves, continuously or discontinuously, from one location to another over a short time interval, introducing a perceived spatial modulation or jitter. In another embodiment, a weighted sum of background signals adjacent to each ear is formed, and the weighted sum is delivered to each ear as a uniform background; a distinguishable alert signal is presented on top of this weighted sum signal at one ear, or distinguishable first and second alert signals are presented at the two ears of a subject.
A hallmark of the developing auditory cortex is the heightened plasticity in the critical period, during which acoustic inputs can indelibly alter cortical function. However, not all sounds in the natural acoustic environment are ethologically relevant. How does the auditory system resolve relevant sounds from the acoustic environment in such an early developmental stage when most associative learning mechanisms are not yet fully functional? What can the auditory system learn from one of the most important classes of sounds, animal vocalizations? How does naturalistic acoustic experience shape cortical sound representation and perception? To answer these questions, we need to consider an unusual strategy, statistical learning, where what the system needs to learn is embedded in the sensory input. Here, I will review recent findings on how certain statistical structures of natural animal vocalizations shape auditory cortical acoustic representations, and how cortical plasticity may underlie learned categorical sound perception. These results will be discussed in the context of human speech perception.
Anderson, William A.
Four auditory delivery systems and their implications for instructing handicapped children are discussed. Outlined are six potential benefits of applying technologies to education, such as making education more productive. Pointed out are potential uses of sub-channel radio (such as programming for the blind), of broadband communication (such as…
Lee, Adrian KC; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G
Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the “cocktail party” problem. PMID:23850664
Punch, Simone; Van Dun, Bram; King, Alison; Carter, Lyndal; Pearce, Wendy
This article presents the clinical protocol that is currently being used within Australian Hearing for infant hearing aid evaluation using cortical auditory evoked potentials (CAEPs). CAEP testing is performed in the free field at two stimulus levels (65 dB sound pressure level [SPL], followed by 55 or 75 dB SPL) using three brief frequency-distinct speech sounds /m/, /ɡ/, and /t/, within a standard audiological appointment of up to 90 minutes. CAEP results are used to check or guide modifications of hearing aid fittings or to confirm unaided hearing capability. A retrospective review of 83 client files evaluated whether clinical practice aligned with the clinical protocol. It showed that most children could be assessed as part of their initial fitting program when they were identified as a priority for CAEP testing. Aided CAEPs were most commonly assessed within 8 weeks of the fitting. A survey of 32 pediatric audiologists provided information about their perception of cortical testing at Australian Hearing. The results indicated that clinical CAEP testing influenced audiologists' approach to rehabilitation, that it was well received by parents, and that audiologists were satisfied with the technique. Three case studies were selected to illustrate how CAEP testing can be used in a clinical environment. Overall, CAEP testing has been effectively integrated into the infant fitting program. PMID:27587921
Brown, Carolyn J; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul J
Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared with nonmusicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and nonmusicians. Twenty individuals (10 musicians and 10 nonmusicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three-interval, forced-choice procedure. The ACC was recorded and used as an objective (i.e., nonbehavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. As a group, musicians were able to detect smaller changes in pitch than nonmusicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than nonmusicians. ACC responses recorded from musicians were larger than those recorded from nonmusicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than nonmusicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the ripple noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than nonmusicians. Those differences are evident not only in perceptual/behavioral tests but also in electrophysiologic measures.
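The normalization step described above (ACC amplitude expressed relative to the onset-response amplitude) can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the peak-to-peak definition of amplitude, the window indices, and all names are assumptions.

```python
def peak_to_peak(waveform, start, stop):
    """Peak-to-peak amplitude over the sample window [start, stop)."""
    segment = waveform[start:stop]
    return max(segment) - min(segment)

def normalized_acc(waveform, onset_window, acc_window):
    """ACC amplitude as a fraction of the onset-response amplitude.

    Normalizing to the onset response reduces the influence of overall
    response size when comparing listeners. Windows are (start, stop)
    sample-index pairs.
    """
    onset = peak_to_peak(waveform, *onset_window)
    acc = peak_to_peak(waveform, *acc_window)
    return acc / onset

# Toy waveform: a large onset deflection followed by a smaller ACC deflection
toy = [0, 4, -4, 0, 0, 2, -2, 0]
ratio = normalized_acc(toy, onset_window=(0, 4), acc_window=(4, 8))
```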
Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina
Attention induces synchronicity in neuronal firing for the encoding of a given stimulus at the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory…
Patel, Tirth R; Shahin, Antoine J; Bhat, Jyoti; Welling, D Bradley; Moberly, Aaron C
We describe a novel use of cortical auditory evoked potentials in the preoperative workup to determine ear candidacy for cochlear implantation. A 71-year-old male was evaluated who had a long-deafened right ear, had never worn a hearing aid in that ear, and relied heavily on use of a left-sided hearing aid. Electroencephalographic testing was performed using free field auditory stimulation of each ear independently with pure tones at 1000 and 2000 Hz at approximately 10 dB above pure-tone thresholds for each frequency and for each ear. Mature cortical potentials were identified through auditory stimulation of the long-deafened ear. The patient underwent successful implantation of that ear. He experienced progressively improving aided pure-tone thresholds and binaural speech recognition benefit (AzBio score of 74%). Findings suggest that use of cortical auditory evoked potentials may serve a preoperative role in ear selection prior to cochlear implantation. © The Author(s) 2016.
Simpson, Andrew J R; Harper, Nicol S; Reiss, Joshua D; McAlpine, David
Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly.
Alho, Kimmo; Grimm, Sabine; Mateo-León, Sabina; Costa-Faidella, Jordi; Escera, Carles
Middle-latency auditory evoked potentials, which index early cortical processing, were recorded in healthy adults who ignored the sounds while watching a silenced movie; the potentials were elicited by pitch changes and repetitions in pure tones and in complex tones with a missing-fundamental pitch. Both for the pure and for the missing-fundamental tones, the Nb middle-latency response was larger for pitch changes (tones preceded by tones of different pitch) than for pitch repetitions (tones preceded by tones of the same pitch). This Nb enhancement was observed even for missing-fundamental tones preceded by repeated tones that had a different missing-fundamental pitch but included all harmonics of the subsequent tone with another missing-fundamental pitch. This finding rules out the possibility that the Nb enhancement in response to a change in missing-fundamental pitch was simply attributable to the activity of auditory cortex neurons responding specifically to the harmonics of missing-fundamental tones. The Nb effect presumably indicates pitch processing at or near the primary auditory cortex, and it was followed by a change-related enhancement of the N1 response, presumably generated in the secondary auditory cortex. This N1 enhancement might have been caused by a mismatch negativity response overlapping with the N1 response. Processing of missing-fundamental pitch was also reflected by the distribution of Nb responses. Tones with a higher missing-fundamental pitch elicited more frontally dominant Nb responses than tones with a lower missing-fundamental pitch. This effect of pitch, not seen for the pure tones, might indicate that the exact location of the Nb generator source in the auditory cortex depends on the missing-fundamental pitch of the eliciting tone.
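A missing-fundamental tone of the kind described above can be synthesized by summing harmonics of a fundamental f0 while omitting f0 itself. The sketch below is a generic illustration, not the study's stimulus code; the harmonic set, sample rate, and duration are assumed values.

```python
import math

def missing_fundamental_tone(f0, harmonics=(2, 3, 4, 5), sr=16000, dur=0.1):
    """Complex tone containing the given harmonics of f0 but no energy at
    f0 itself; listeners nevertheless hear its pitch at the missing
    fundamental f0.
    """
    n = int(sr * dur)
    return [sum(math.sin(2 * math.pi * h * f0 * t / sr) for h in harmonics)
            for t in range(n)]

tone = missing_fundamental_tone(200.0)
```

Correlating the result with a sinusoid at f0 over a whole number of cycles yields essentially zero, confirming that the fundamental itself is absent, while correlation at any included harmonic is large.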
Ahveninen, Jyrki; Huang, Samantha; Ahlfors, Seppo P; Hämäläinen, Matti; Rossi, Stephanie; Sams, Mikko; Jääskeläinen, Iiro P
Spatial and non-spatial information of sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated to perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform, middle temporal or MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical what and where pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.
Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone
Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843
Scharinger, Mathias; Monahan, Philip J.; Idsardi, William J.
While previous research has established that language-specific knowledge influences early auditory processing, it is still controversial as to what aspects of speech sound representations determine early speech perception. Here, we propose that early processing primarily depends on information propagated top-down from abstractly represented speech sound categories. In particular, we assume that mid-vowels (as in ‘bet’) exert less top-down effects than the high-vowels (as in ‘bit’) because of their less specific (default) tongue height position as compared to either high- or low-vowels (as in ‘bat’). We tested this assumption in a Magnetoencephalographic (MEG) study where we contrasted mid- and high-vowels, as well as the low- and high-vowels in a passive oddball paradigm. Overall, significant differences between deviants and standards indexed reliable mismatch-negativity (MMN) responses between 200 and 300 ms post stimulus onset. MMN amplitudes differed in the mid/high-vowel contrasts and were significantly reduced when a mid-vowel standard was followed by a high-vowel deviant, extending previous findings. Furthermore, mid-vowel standards showed reduced oscillatory power in the pre-stimulus beta-frequency band (18–26 Hz), compared to high-vowel standards. We take this as converging evidence for linguistic category structure to exert top-down influences on auditory processing. The findings are interpreted within the linguistic model of underspecification and the neuropsychological predictive coding framework. PMID:26780574
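The passive oddball paradigm used in this study can be sketched as a simple trial-label generator. The sketch below assumes a deviant probability and a no-consecutive-deviants constraint, both of which are common conventions in oddball designs rather than parameters reported above.

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.15, seed=1):
    """'standard'/'deviant' trial labels for a passive oddball block.

    A deviant is never immediately followed by another deviant (an
    assumed convention here, not taken from the paper).
    """
    rng = random.Random(seed)  # seeded for reproducible sequences
    labels = []
    for _ in range(n_trials):
        if labels and labels[-1] == "deviant":
            labels.append("standard")
        elif rng.random() < deviant_prob:
            labels.append("deviant")
        else:
            labels.append("standard")
    return labels

seq = oddball_sequence(400)
```

In an MEG/EEG analysis, responses to the deviant trials would then be averaged and contrasted with standard-trial responses to derive the MMN.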
Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D
Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651
Müller, Nadia; Lorenz, Isabel; Langguth, Berthold; Weisz, Nathan
Chronic tinnitus, the continuous perception of a phantom sound, is a highly prevalent audiological symptom. A promising approach for the treatment of tinnitus is repetitive transcranial magnetic stimulation (rTMS) as this directly affects tinnitus-related brain activity. Several studies indeed show tinnitus relief after rTMS, however effects are moderate and vary strongly across patients. This may be due to a lack of knowledge regarding how rTMS affects oscillatory activity in tinnitus sufferers and which modulations are associated with tinnitus relief. In the present study we examined the effects of five different stimulation protocols (including sham) by measuring tinnitus loudness and tinnitus-related brain activity with Magnetoencephalography before and after rTMS. Changes in oscillatory activity were analysed for the stimulated auditory cortex as well as for the entire brain regarding certain frequency bands of interest (delta, theta, alpha, gamma). In line with the literature the effects of rTMS on tinnitus loudness varied strongly across patients. This variability was also reflected in the rTMS effects on oscillatory activity. Importantly, strong reductions in tinnitus loudness were associated with increases in alpha power in the stimulated auditory cortex, while an unspecific decrease in gamma and alpha power, particularly in left frontal regions, was linked to an increase in tinnitus loudness. The identification of alpha power increase as main correlate for tinnitus reduction sheds further light on the pathophysiology of tinnitus. This will hopefully stimulate the development of more effective therapy approaches. PMID:23390539
Chandrasekaran, Chandramouli; Lemus, Luis; Ghazanfar, Asif A.
How low-level sensory areas help mediate the detection and discrimination advantages of integrating faces and voices is the subject of intense debate. To gain insights, we investigated the role of the auditory cortex in face/voice integration in macaque monkeys performing a vocal-detection task. Behaviorally, subjects were slower to detect vocalizations as the signal-to-noise ratio decreased, but seeing mouth movements associated with vocalizations sped up detection. Paralleling this behavioral relationship, as the signal-to-noise ratio decreased, the onset of spiking responses was delayed and response magnitudes decreased. However, when mouth motion accompanied the vocalization, these responses were uniformly faster. Conversely, and at odds with previous assumptions regarding the neural basis of face/voice integration, changes in the magnitude of neural responses were not related consistently to audiovisual behavior. Taken together, our data reveal that facilitation of spike latency is a means by which the auditory cortex partially mediates the reaction time benefits of combining faces and voices. PMID:24218574
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.
Shamma, Shihab A; Micheyl, Christophe
'Auditory scenes' often contain contributions from multiple acoustic sources. These are usually heard as separate auditory 'streams', which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the past two years indicate that both cortical and subcortical processes contribute to the formation of auditory streams, and they raise important questions concerning the roles of primary and secondary areas of auditory cortex in this phenomenon. In addition, these findings underline the importance of taking into account the relative timing of neural responses, and the influence of selective attention, in the search for neural correlates of the perception of auditory streams.
Race, Nicholas; Lai, Jesyin; Shi, Riyi; Bartlett, Edward L
Hearing difficulties are the most commonly reported disabilities among veterans. Blast exposures during explosive events likely play a role, given their propensity to directly damage both peripheral (PAS) and central (CAS) auditory system components. Post-blast PAS pathophysiology has been well documented in both clinical case reports and laboratory investigations. In contrast, blast-induced CAS dysfunction remains under-studied, but has been hypothesized to contribute to an array of common veteran behavioral complaints including learning, memory, communication, and emotional regulation. This investigation compared the effects of acute blast and non-blast acoustic impulse trauma in adult male Sprague-Dawley rats. An array of audiometric tests was used, including distortion product otoacoustic emissions (DPOAE), auditory brainstem responses (ABR), middle latency responses (MLR), and envelope following responses (EFR). Generally, more severe and persistent post-injury central auditory processing (CAP) deficits were observed in blast-exposed animals throughout the auditory neuraxis, spanning from the cochlea to the cortex. DPOAE and ABR results captured cochlear and auditory nerve/brainstem deficits, respectively. EFRs demonstrated temporal processing impairments suggestive of functional damage to regions in the auditory brainstem and the inferior colliculus. MLRs captured thalamocortical transmission and cortical activation impairments. Taken together, the results suggest blast-induced CAS dysfunction may play a complementary pathophysiologic role to maladaptive neuroplasticity of PAS origin. Even mild blasts can produce lasting hearing impairments that can be assessed with non-invasive electrophysiology, allowing these measurements to serve as simple, effective diagnostics.
Bonte, Milene; Ley, Anke; Scharke, Wolfgang; Formisano, Elia
Development typically leads to optimized and adaptive neural mechanisms for the processing of voice and speech. In this fMRI study we investigated how this adaptive processing reaches its mature efficiency by examining the effects of task, age and phonological skills on cortical responses to voice and speech in children (8-9 years), adolescents (14-15 years) and adults. Participants listened to vowels (/a/, /i/, /u/) spoken by different speakers (boy, girl, man) and performed delayed-match-to-sample tasks on vowel and speaker identity. Across age groups, similar behavioral accuracy and comparable sound evoked auditory cortical fMRI responses were observed. Analysis of task-related modulations indicated a developmental enhancement of responses in the (right) superior temporal cortex during the processing of speaker information. This effect was most evident through an analysis based on individually determined voice sensitive regions. Analysis of age effects indicated that the recruitment of regions in the temporal-parietal cortex and posterior cingulate/cingulate gyrus decreased with development. Beyond age-related changes, the strength of speech-evoked activity in left posterior and right middle superior temporal regions significantly scaled with individual differences in phonological skills. Together, these findings suggest a prolonged development of the cortical functional network for speech and voice processing. This development includes a progressive refinement of the neural mechanisms for the selection and analysis of auditory information relevant to the ongoing behavioral task.
Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory.
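The forgetting threshold reported above (the delay at which performance falls to 75% accuracy) can be read off a retention curve by linear interpolation between the bracketing data points. The sketch below uses hypothetical data; only the 75% criterion comes from the abstract.

```python
def forgetting_threshold(delays_s, pct_correct, criterion=75.0):
    """Delay (s) at which accuracy falls to the criterion.

    Linearly interpolates between the two points of a retention curve
    that bracket the criterion; assumes accuracy declines with delay.
    Returns None if the curve never crosses the criterion.
    """
    points = list(zip(delays_s, pct_correct))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 >= criterion >= p1 and p0 != p1:
            return d0 + (p0 - criterion) * (d1 - d0) / (p0 - p1)
    return None

# Hypothetical retention curve: accuracy vs. sample delay in seconds
threshold = forgetting_threshold([1, 5, 10, 40], [95, 85, 80, 70])
```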
Barbour, Dennis L.
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. PMID:21540053
Cone-Wesson, Barbara; Wunderlich, Julia
The audiological applications of cortical auditory evoked potentials are reviewed. Cortical auditory evoked potentials have some advantages compared with more commonly used techniques such as the auditory brainstem response, because they are more closely tied to perception and can be evoked by complex sounds such as speech. These response characteristics suggest that these potentials could be used clinically in the estimation of threshold and also to assess speech discrimination and perception. Clinical uses of auditory evoked potentials include threshold estimation and their use as an electrophysiological index of auditory system development, auditory discrimination and speech perception, and the benefits from cochlear implantation, auditory training, or amplification. Cortical auditory evoked potentials obtained in passively alert adults have a remarkably high correspondence with perceptual threshold. Acoustic features of complex sounds may be reflected in the waveform and latency of these potentials and so might be used to determine the integrity of neural encoding for such features and thus contribute to speech perception assessment. The mismatch negativity (MMN) and P3 responses have been used to discern discrimination abilities among groups of normal-hearing and hearing-impaired individuals; however, their sensitivity and specificity for testing an individual's abilities has not yet been established. Cortical auditory potentials are affected by listening experience and attention and so could be used to gauge the effects of aural habilitation. The presence of cortical potentials in children with auditory neuropathy appears to indicate residual hearing abilities. Parametric and developmental research is needed to further establish these applications in audiology.
Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.
Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543
Teng, Xiangbin; Tian, Xing; Poeppel, David
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system is not widely investigated in the literature, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively ‘local’ and ‘global’ scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and uses long temporal windows to abstract global acoustic patterns. Behavioral task performance that requires processing fine-detail information does not improve with longer stimulus length, contrary to predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern model. Moreover, the perceptual construction of putatively ‘unitary’ auditory events requires more than hundreds of milliseconds. These findings support the hypothesis of a dual-scale processing likely implemented in the auditory cortex. PMID:27713546
Du, Yi; Buchsbaum, Bradley R; Grady, Cheryl L; Alain, Claude
Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (-12, -9, -6, -2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca's area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca's area exhibited effective phoneme categorization when SNR ≥ -6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.
Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H
Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior. PMID:24336712
Li, Shu-Chen; Passow, Susanne; Nietfeld, Wilfried; Schröder, Julia; Bertram, Lars; Heekeren, Hauke R; Lindenberger, Ulman
Using a specific variant of the dichotic listening paradigm, we studied the influence of dopamine on attentional modulation of auditory perception by assessing effects of allelic variation of a single-nucleotide polymorphism (SNP) rs907094 in the DARPP-32 gene (dopamine- and adenosine 3',5'-monophosphate-regulated phosphoprotein, 32 kilodaltons; also known as PPP1R1B) on behavior and cortical evoked potentials. A frequent DARPP-32 haplotype that includes the A allele of this SNP is associated with higher mRNA expression of DARPP-32 protein isoforms, striatal dopamine receptor function, and frontal-striatal connectivity. As we hypothesized, behaviorally the A homozygotes were more flexible in selectively attending to auditory inputs than any G carriers. Moreover, this genotype also affected auditory evoked cortical potentials that reflect early sensory and late attentional processes. Specifically, analyses of event-related potentials (ERPs) revealed that amplitudes of an early component of sensory selection (N1) and a late component (N450) reflecting attentional deployment for conflict resolution were larger in A homozygotes than in any G carriers. Taken together, our data lend support to dopamine's role in modulating auditory attention both during the early sensory selection and late conflict resolution stages.
Hunter, Michael; Villarreal, Gerardo; McHaffie, Greg R; Jimenez, Billy; Smith, Ashley K; Calais, Lawrence A; Hanlon, Faith; Thoma, Robert J; Cañive, José M
Auditory sensory gating deficits have been reported in subjects with post-traumatic stress disorder (PTSD), but the hemispheric and neuronal origins of this deficit are not well understood. The objectives of this study were to: (1) investigate auditory sensory gating of the 50-ms response (M50) in patients diagnosed with PTSD by utilizing magnetoencephalography (MEG); (2) explore the relationship between M50 sensory gating and cortical thickness of the superior temporal gyrus (STG) measured with structural magnetic resonance imaging (MRI); and (3) examine the association between PTSD symptomatology and bilateral sensory gating. Seven participants with combat-related PTSD and eleven controls underwent the paired-click sensory gating paradigm. MEG localized M50 neuronal generators to the STG in both groups. The PTSD group displayed impaired M50 gating in the right hemisphere. Thinner right STG cortical thickness was associated with worse right sensory gating in the PTSD group. The right S1 M50 source strength and gating ratio were correlated with PTSD symptomatology. These findings suggest that the structural integrity of right-hemisphere STG cortices plays an important role in auditory sensory gating deficits in PTSD.
Hanss, Julien; Veuillet, Evelyne; Adjout, Kamel; Besle, Julien; Collet, Lionel; Thai-Van, Hung
Background: In normal-hearing subjects, monaural stimulation produces a normal pattern of asynchrony and asymmetry over the auditory cortices in favour of the contralateral temporal lobe. While late onset unilateral deafness has been reported to change this pattern, the exact influence of the side of deafness on central auditory plasticity still remains unclear. The present study aimed at assessing whether left-sided and right-sided deafness had differential effects on the characteristics of neurophysiological responses over auditory areas. Eighteen unilaterally deaf and 16 normal hearing right-handed subjects participated. All unilaterally deaf subjects had post-lingual deafness. Long latency auditory evoked potentials (late-AEPs) were elicited by two types of stimuli, non-speech (1 kHz tone-burst) and speech-sounds (voiceless syllable /pa/) delivered to the intact ear at 50 dB SL. The latencies and amplitudes of the early exogenous components (N100 and P150) were measured using temporal scalp electrodes. Results: Subjects with left-sided deafness showed major neurophysiological changes, in the form of a more symmetrical activation pattern over auditory areas in response to non-speech sound and even a significant reversal of the activation pattern in favour of the cortex ipsilateral to the stimulation in response to speech sound. This was observed not only for AEP amplitudes but also for AEP time course. In contrast, no significant changes were reported for late-AEP responses in subjects with right-sided deafness. Conclusion: The results show that cortical reorganization induced by unilateral deafness mainly occurs in subjects with left-sided deafness. This suggests that anatomical and functional plastic changes are more likely to occur in the right than in the left auditory cortex. The possible perceptual correlates of such neurophysiological changes are discussed. PMID:19309511
Gold, Joshua R.; Bajo, Victoria M.
The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimizing performance and behavior. As an avenue for investigation, studies centered around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise-trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple—if not all—levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioral implications. Nevertheless, substantial debate remains regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism's competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information. PMID:24904256
Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies
The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'.
The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles: malleus, incus and stapes) and the inner ear (the cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.
Friauf, Eckhard; Fischer, Alexander U; Fuhr, Martin F
Synaptic transmission via chemical synapses is dynamic, i.e., the strength of postsynaptic responses may change considerably in response to repeated synaptic activation. Synaptic strength is increased during facilitation, augmentation and potentiation, whereas a decrease in synaptic strength is characteristic for depression and attenuation. This review discusses the literature on short-term and long-term synaptic plasticity in the auditory brainstem of mammals and birds. One hallmark of the auditory system, particularly the inner ear and lower brainstem stations, is information transfer through neurons that fire action potentials at very high frequency, thereby activating synapses >500 times per second. Some auditory synapses display morphological specializations of the presynaptic terminals, e.g., calyceal extensions, whereas other auditory synapses do not. The review focuses on short-term depression and short-term facilitation, i.e., plastic changes with durations in the millisecond range. Other types of short-term synaptic plasticity, e.g., posttetanic potentiation and depolarization-induced suppression of excitation, will be discussed much more briefly. The same holds true for subtypes of long-term plasticity, like prolonged depolarizations and spike-time-dependent plasticity. We also address forms of plasticity in the auditory brainstem that do not comprise synaptic plasticity in a strict sense, namely short-term suppression, paired tone facilitation, short-term adaptation, synaptic adaptation and neural adaptation. Finally, we perform a meta-analysis of 61 studies in which short-term depression (STD) in the auditory system is opposed to short-term depression at non-auditory synapses in order to compare high-frequency neurons with those that fire action potentials at a lower rate. This meta-analysis reveals considerably less STD in most auditory synapses than in non-auditory ones, enabling reliable, failure-free synaptic transmission even at high frequencies.
Ouda, Ladislav; Profant, Oliver; Syka, Josef
Aging is accompanied by the deterioration of hearing that complicates our understanding of speech, especially in noisy environments. This deficit is partially caused by the loss of hair cells as well as by the dysfunction of the stria vascularis. However, the central part of the auditory system is also affected by processes accompanying aging that may run independently of those affecting peripheral receptors. Here, we review major changes occurring in the central part of the auditory system during aging. Most of the information on age-related changes in the central auditory system of experimental animals arises from experiments using immunocytochemical targeting of changes in glutamic acid decarboxylase, parvalbumin, calbindin and calretinin. These data are accompanied by information about age-related changes in the number of neurons as well as about changes in the behavior of experimental animals. Aging is in principle accompanied by atrophy of the gray as well as white matter, resulting in the enlargement of the cerebrospinal fluid space. The human auditory cortex suffers not only from atrophy but also from changes in the content of some metabolites in the aged brain, as shown by magnetic resonance spectroscopy. In addition to this, functional magnetic resonance imaging reveals differences between activation of the central auditory system in the young and old brain. Altogether, the information reviewed in this article speaks in favor of specific age-related changes in the central auditory system that occur mostly independently of the changes in the inner ear and that form the basis of the central presbycusis.
Nekrassov, Vladimir; Sitges, María
Here we investigate the effect of the neuroprotective drug vinpocetine on epileptic cortical activity, on the alterations of the later waves of brainstem auditory evoked potentials (BAEPs) and on the hearing decline induced by the convulsant agent pentylenetetrazole (PTZ). Vinpocetine at doses from 2 to 10 mg/kg inhibits the tonic-clonic convulsions induced by PTZ (100 mg/kg). Vinpocetine injected at a dose of 2 mg/kg 4 h before PTZ completely prevents the characteristic electroencephalogram (EEG) changes induced by PTZ for the ictal and post-ictal periods. Vinpocetine also abolished the PTZ-induced changes in the amplitude and latency of the later waves of the BAEPs in response to pure tone burst monaural stimuli (frequency 8 or 4 kHz, intensity 100 dB), and the PTZ-induced increase in the BAEP threshold. These results show the antiepileptic potential of vinpocetine and indicate the capability of vinpocetine to prevent the changes in the BAEP waves associated with the hearing loss observed during generalized epilepsy.
Billings, Curtis J.; Papesh, Melissa A.; Penman, Tina M.; Baltzell, Lucas S.; Gallun, Frederick J.
The clinical usefulness of aided cortical auditory evoked potentials (CAEPs) remains unclear despite several decades of research. One major contributor to this ambiguity is the wide range of variability across published studies and across individuals within a given study; some results demonstrate expected amplification effects, while others demonstrate limited or no amplification effects. Recent evidence indicates that some of the variability in amplification effects may be explained by distinguishing between experiments that focused on physiological detection of a stimulus versus those that differentiate responses to two audible signals, or physiological discrimination. Herein, we ask if either of these approaches is clinically feasible given the inherent challenges with aided CAEPs. N1 and P2 waves were elicited from 12 noise-masked normal-hearing individuals using hearing-aid-processed 1000-Hz pure tones. Stimulus levels were varied to study the effect of hearing-aid-signal/hearing-aid-noise audibility relative to the noise-masked thresholds. Results demonstrate that clinical use of aided CAEPs may be justified when determining whether audible stimuli are physiologically detectable relative to inaudible signals. However, differentiating aided CAEPs elicited from two suprathreshold stimuli (i.e., physiological discrimination) is problematic and should not be used for clinical decision making until a better understanding of the interaction between hearing-aid-processed stimuli and CAEPs can be established. PMID:23093964
Ching, Teresa Y C; Zhang, Vicky W; Hou, Sanna; Van Buynder, Patricia
Hearing loss in children is detected soon after birth via newborn hearing screening. Procedures for early hearing assessment and hearing aid fitting are well established, but methods for evaluating the effectiveness of amplification for young children are limited. One promising approach to validating hearing aid fittings is to measure cortical auditory evoked potentials (CAEPs). This article provides first a brief overview of reports on the use of CAEPs for evaluation of hearing aids. Second, a study that measured CAEPs to evaluate nonlinear frequency compression (NLFC) in hearing aids for 27 children (between 6.1 and 16.8 years old) who have mild to severe hearing loss is reported. There was no significant difference in aided sensation level or the detection of CAEPs for /g/ between NLFC on and off conditions. The activation of NLFC was associated with a significant increase in aided sensation levels for /t/ and /s/. It also was associated with an increase in detection of CAEPs for /t/ and /s/. The findings support the use of CAEPs for checking audibility provided by hearing aids. Based on the current data, a clinical protocol for using CAEPs to validate audibility with amplification is presented. PMID:27587920
Viaene, Angela N.; Petrof, Iraklis; Sherman, S. Murray
The classification of synaptic inputs is an essential part of understanding brain circuitry. In the present study, we examined the synaptic properties of thalamic inputs to pyramidal neurons in layers 5a, 5b, and 6 of primary somatosensory (S1) and auditory (A1) cortices in mouse thalamocortical slices. Stimulation of the ventral posterior medial nucleus (VPM) and the ventral division of the medial geniculate body (MGBv) resulted in three distinct response classes, two of which have never been described before in thalamocortical projections. Class 1A responses included synaptic depression and all-or-none responses while Class 1B responses exhibited synaptic depression and graded responses. Class 1C responses are characterized by mixed facilitation and depression as well as graded responses. Activation of metabotropic glutamate receptors was not observed in any of the response classes. We conclude that Class 1 responses can be broken up into three distinct subclasses, and that thalamic inputs to the subgranular layers of cortex may combine with other, intracortical inputs to drive their postsynaptic target cells. We also integrate these results with our recent, analogous study of thalamocortical inputs to granular and supragranular layers (Viaene et al., 2011). PMID:21900553
Okada, Kayoko; Hickok, Gregory
Visual speech (lip-reading) influences the perception of heard speech. The literature suggests at least two possible mechanisms for this influence: "direct" sensory-sensory interaction, whereby sensory signals from auditory and visual modalities are integrated directly, likely in the superior temporal sulcus, and "indirect" sensory-motor interaction, whereby visual speech is first mapped onto motor-speech representations in the frontal lobe, which in turn influences sensory perception via sensory-motor integration networks. We hypothesize that both mechanisms exist, and further that previous demonstrations of lip-reading functional activations in Broca's region and the posterior planum temporale reflect the sensory-motor mechanism. We tested one prediction of this hypothesis using fMRI. We assessed whether viewing visual speech (contrasted with facial gestures) activates the same network as a speech sensory-motor integration task (listen to and then silently rehearse speech). Both tasks activated locations within Broca's area, dorsal premotor cortex, and the posterior planum temporale (Spt), and focal regions of the STS, all of which have previously been implicated in sensory-motor integration for speech. This finding is consistent with the view that visual speech influences heard speech via sensory-motor networks. Lip-reading also activated a much wider network in the superior temporal lobe than the sensory-motor task, possibly reflecting a more direct cross-sensory integration network.
Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea
Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation for ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they reveal frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate and fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
Grimm, Sabine; Escera, Carles
The fast detection of novel or deviant stimuli is a striking property of auditory processing which reflects basic organizational principles of the auditory system and at the same time is of high practical significance. In human electrophysiology, deviance detection has been related to the occurrence of the mismatch negativity (MMN), a component of the event-related potential (ERP) evoked 100 to 250 ms after the occurrence of a rare irregular sound. Recently, it has been shown in animal studies that a considerable portion of neurons in the auditory pathway exhibits a property called stimulus-specific adaptation enabling them to encode inter-sound relationships and to discharge at higher rates to rare changes in the acoustic stimulation. These neural responses have been linked to the deviant-evoked potential measured at the human scalp, but such responses occur at lower levels anatomically (e.g. the primary auditory cortex as well as the inferior colliculi) and are elicited earlier (20-30 ms after sound onset) in comparison to MMN. Further, they are not of sufficient magnitude to be interpreted as a direct neural correlate of the MMN. We review here a series of recent findings that provides a first step toward filling this gap between animal and human recordings by showing that comparably early modulations due to a sound's deviancy can be observed in humans, particularly in the middle-latency portion of the ERP within the first 50 ms after sound onset. The existence of those early indices of deviance detection preceding the well-studied MMN component strongly supports the idea that the encoding of regularities and the detection of violations is a basic principle of human auditory processing acting on multiple levels. This supports the notion of a hierarchically organized novelty and deviance detection system in the human auditory system.
Kwok, Veronica P. Y.; Dan, Guo; Yakpo, Kofi; Matthews, Stephen; Fox, Peter T.; Li, Ping; Tan, Li-Hai
The neural systems of lexical tone processing have been studied for many years. However, previous findings have been mixed with regard to the hemispheric specialization for the perception of linguistic pitch patterns in native speakers of tonal languages. In this study, we performed two activation likelihood estimation (ALE) meta-analyses, one on neuroimaging studies of auditory processing of lexical tones in tonal languages (17 studies), and the other on auditory processing of lexical information in non-tonal languages as a control analysis for comparison (15 studies). The lexical tone ALE analysis showed significant brain activations in bilateral inferior prefrontal regions, bilateral superior temporal regions and the right caudate, while the control ALE analysis showed significant cortical activity in the left inferior frontal gyrus and left temporo-parietal regions. However, we failed to obtain significant differences from the contrast analysis between the two auditory conditions, which might be due to the limited number of studies available for comparison. Although the current study lacks evidence to argue for a lexical-tone-specific activation pattern, our results provide clues and directions for future investigations on this topic; more sophisticated methods are needed to explore this question in greater depth. PMID:28798670
Gilley, Phillip M; Sharma, Anu; Dorman, Michael F
Congenital deafness leads to atypical organization of the auditory nervous system. However, the extent to which auditory pathways reorganize during deafness is not well understood. We recorded cortical auditory evoked potentials in normal hearing children and in congenitally deaf children fitted with cochlear implants. High-density EEG and source modeling revealed principal activity from auditory cortex in normal hearing and early implanted children. However, children implanted after a critical period of seven years revealed activity from parietotemporal cortex in response to auditory stimulation, demonstrating reorganized cortical pathways. Reorganization of central auditory pathways is limited by the age at which implantation occurs, and may help explain the benefits and limitations of implantation in congenitally deaf children.
Fujimoto, Toshiro; Okumura, Eiichi; Kodabashi, Atsushi; Takeuchi, Kouzou; Otsubo, Toshiaki; Nakamura, Katsumi; Yatsushiro, Kazutaka; Sekine, Masaki; Kamiya, Shinichiro; Shimooki, Susumu; Tamura, Toshiyo
We studied sex-related differences in gamma oscillation during an auditory oddball task, using magnetoencephalography and electroencephalography assessment of imaginary coherence (IC). We obtained a statistical source map of event-related desynchronization (ERD) / event-related synchronization (ERS), and compared females and males regarding ERD / ERS. Based on the results, we chose seed regions for IC determinations in the low (30-50 Hz), mid (50-100 Hz) and high gamma (100-150 Hz) bands. In males, ERD was increased in the left posterior cingulate cortex (CGp) at 500 ms in the low gamma band, and in the right caudal anterior cingulate cortex (cACC) at 125 ms in the mid-gamma band. ERS was increased in the left rostral anterior cingulate cortex (rACC) at 375 ms in the high gamma band. We chose the CGp, cACC and rACC as seeds, and examined IC between each seed and certain target regions using the IC map. IC changes depended on the gamma frequency band and the time window. Although IC in the mid and high gamma bands did not show sex-specific differences, IC at 30-50 Hz in males was increased between the left rACC and the frontal, orbitofrontal, inferior temporal and fusiform target regions. The increased IC in males suggests that males may approach the task constructively, analytically and emotionally, and that their information processing was more complicated in the cortico-cortical circuit. On the other hand, females showed few differences in IC. Females appeared to perform the task with general attention and economical, well-balanced processing, consistent with higher overall functional cortical connectivity. The CGp, cACC and rACC were involved in sex differences in information processing and were likely related to differences in neuroanatomy, hormones and neurotransmitter systems. PMID:27708745
Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.
To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex, respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state of the art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974
Wetherby, Amy Miller; And Others
The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)
Vanvooren, Sophie; Hofmann, Michael; Poelmans, Hanne; Ghesquière, Pol; Wouters, Jan
In the brain, the temporal analysis of many important auditory features relies on the synchronized firing of neurons to the auditory input rhythm. These so-called neural oscillations play a crucial role in sensory and cognitive processing, and deviances in oscillatory activity have been shown to be associated with neurodevelopmental disorders. Given the importance of neural auditory oscillations in normal and impaired sensory and cognitive functioning, there has been growing interest in their developmental trajectory from early childhood on. In the present study, neural auditory processing was investigated in typically developing young children (n = 40) and adults (n = 27). In all participants, auditory evoked theta, beta and gamma responses were recorded. The results of this study show maturational differences between children and adults in neural auditory processing at cortical as well as at brainstem level. Neural background noise at cortical level was shown to be higher in children compared to adults. In addition, higher theta response amplitudes were measured in children compared to adults. For beta and gamma rate modulations, different processing asymmetry patterns were observed between the two age groups. The mean response phase was also shown to differ significantly between children and adults for all rates. Results suggest that cortical auditory processing of beta develops from a general processing pattern into a more specialized asymmetric processing preference with age. Moreover, the results indicate an enhancement of bilateral representation of monaural sound input at brainstem level with age. A dissimilar efficiency of auditory signal transmission from brainstem to cortex along the auditory pathway between children and adults is suggested. These developmental differences might be due to both functional experience-dependent as well as anatomical changes. The findings of the present study offer important information about maturational differences in neural auditory processing between children and adults.
Carrillo-de-la-Peña, M T; Vallet, M; Pérez, M I; Gómez-Perretta, C
On the basis of recent evidence concerning the amplification of incoming stimulation in fibromyalgia (FM) patients, it has been proposed that a generalized hypervigilance of painful and nonpainful sensations may be at the root of this disorder. So far, research into this issue has been inconclusive, possibly owing to the lack of agreement as to the operational definition of "generalized hypervigilance" and to the lack of robust objective measures characterizing the sensory style of FM patients. In this study, we recorded auditory-evoked potentials (AEPs) elicited by tones of increasing intensity (60, 70, 80, 90, and 105 dB) in 27 female FM patients and 25 healthy controls. Fibromyalgia patients presented shorter N1 and P2 latencies and a stronger intensity dependence of their AEPs. Both results suggest that FM patients may be hypervigilant to sensory stimuli, especially when very loud tones are used. The most noteworthy difference between patients and control subjects is at the highest stimulus intensity, for which far more patients maintained increased N1-P2 amplitudes in relation to the 90-dB tones. The larger AEP amplitudes to the 105-dB tones suggest that defects in an inhibitory system protecting against overstimulation may be a crucial factor in the pathophysiology of FM. Because a stronger loudness dependence of AEPs has been related to weak serotonergic transmission, it is hypothesized that for many FM patients deficient inhibition of the response to noxious and intense auditory stimuli may be due to a serotonergic deficit. The study of auditory-evoked potentials in response to tones of increasing intensity in FM patients may help to clarify the pathophysiology of this disorder, especially regarding the role of inhibition deficits involving serotonergic dysfunction, and may be a useful tool to guide the pharmacologic treatment of FM patients.
Johnson, Jeffrey S.; Yin, Pingbo; O'Connor, Kevin N.
Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1. PMID:22422997
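The temporal (synchrony) code discussed in this abstract is commonly quantified with vector strength, the resultant length of spike phases relative to the modulation cycle. The sketch below uses hypothetical spike trains, not data from the study, to show how phase-locked firing yields a value near 1 while unsynchronized firing yields a value near 0.

```python
import math
import random

def vector_strength(spike_times, mod_freq):
    """Resultant length of spike phases relative to the modulation cycle:
    1.0 = perfect phase locking, values near 0 = no synchrony."""
    if not spike_times:
        return 0.0
    phases = [2 * math.pi * mod_freq * t for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

random.seed(0)
fm = 20.0  # modulation frequency in Hz (within the 20-60 Hz band noted above)
# Hypothetical spike trains for illustration only:
locked = [k / fm + random.gauss(0.0, 0.002) for k in range(100)]   # ~2 ms jitter
unlocked = [random.uniform(0.0, 5.0) for _ in range(100)]          # unsynchronized
vs_locked = vector_strength(locked, fm)
vs_unlocked = vector_strength(unlocked, fm)
```

Pooling spike trains before computing such a measure, as in the across-cell analysis, amounts to concatenating the spike-time lists, which raises the effective sample size and stabilizes the estimate.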
Kwon, Myoung Soo; Huotilainen, Minna; Shestakova, Anna; Kujala, Teija; Näätänen, Risto; Hämäläinen, Heikki
We investigated the effect of mobile phone use on the auditory sensory memory in children. Auditory event-related potentials (ERPs), P1, N2, mismatch negativity (MMN), and P3a, were recorded from 17 children, aged 11-12 years, in the recently developed multi-feature paradigm. This paradigm allows one to determine the neural change-detection profile consisting of several different types of acoustic changes. During the recording, an ordinary GSM (Global System for Mobile Communications) mobile phone emitting 902 MHz (pulsed at 217 Hz) electromagnetic field (EMF) was placed on the ear, over the left or right temporal area (SAR(1g) = 1.14 W/kg, SAR(10g) = 0.82 W/kg, peak value = 1.21 W/kg). The EMF was either on or off in a single-blind manner. We found that a short exposure (two 6 min blocks for each side) to mobile phone EMF has no statistically significant effects on the neural change-detection profile measured with the MMN. Furthermore, the multi-feature paradigm was shown to be well suited for studies of perception accuracy and sensory memory in children. However, it should be noted that the present study only had sufficient statistical power to detect a large effect size. (c) 2009 Wiley-Liss, Inc.
Aldonate, J.; Mercuri, C.; Reta, J.; Biurrun, J.; Bonell, C.; Gentiletti, G.; Escobar, S.; Acevedo, R.
Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.
Bares, Martin; Rektor, Ivan; Kanovský, Petr; Streitová, Hana
This study concerned sensory processing (post-stimulus late evoked potential components) in different parts of the human brain as related to a motor task (hand movement) in a cognitive paradigm (contingent negative variation). The focus of the study was on the time and space distribution of middle and late post-stimulus evoked potential (EP) components, and on the processing of sensory information in subcortical-cortical networks. Stereoelectroencephalography (SEEG) recordings of the contingent negative variation (CNV) in an audio-visual paradigm with a motor task were taken from 30 patients (27 patients with drug-resistant epilepsy; 3 patients with chronic thalamic pain). The intracerebral recordings were taken from 337 cortical sites (the primary sensorimotor area (SM1); the supplementary motor area (SMA); the cingulate gyrus; the orbitofrontal, premotor and dorsolateral prefrontal cortices; the temporal cortex, including the amygdalohippocampal complex; the parietooccipital lobes; and the insula) and from subcortical structures (the basal ganglia and the posterior thalamus). Concurrent scalp recordings were obtained from 3 patients in the thalamic group. In 4 patients in the epilepsy group, scalp recordings were taken separately from the SEEG procedure. The middle and long latency evoked potentials following the auditory warning (S1) and visual imperative (S2) stimuli were analyzed. The occurrences of EPs were studied in two time windows (200-300 ms; and over 300 ms) following S1 and S2. Following S1, a high frequency of EPs with latencies over 200 ms was observed in the primary sensorimotor area, the supplementary motor area, the premotor cortex, the orbitofrontal cortex, the cingulate gyrus, some parts of the temporal lobe, the basal ganglia, the insula, and the posterior thalamus. Following S2, a high frequency of EPs in both time windows over 200 ms was observed in the SM1, the SMA, the premotor and dorsolateral prefrontal cortex, and the orbitofrontal cortex.
Butler, Blake E; Lomber, Stephen G
The absence of auditory input, particularly during development, causes widespread changes in the structure and function of the auditory system, extending from peripheral structures into auditory cortex. In humans, the consequences of these changes are far-reaching and often include detriments to language acquisition, and associated psychosocial issues. Much of what is currently known about the nature of deafness-related changes to auditory structures comes from studies of congenitally deaf or early-deafened animal models. Fortunately, the mammalian auditory system shows a high degree of preservation among species, allowing for generalization from these models to the human auditory system. This review begins with a comparison of common methods used to obtain deaf animal models, highlighting the specific advantages and anatomical consequences of each. Some consideration is also given to the effectiveness of methods used to measure hearing loss during and following deafening procedures. The structural and functional consequences of congenital and early-onset deafness have been examined across a variety of mammals. This review attempts to summarize these changes, which often involve alteration of hair cells and supporting cells in the cochleae, and anatomical and physiological changes that extend through subcortical structures and into cortex. The nature of these changes is discussed, and the impacts to neural processing are addressed. Finally, long-term changes in cortical structures are discussed, with a focus on the presence or absence of cross-modal plasticity. In addition to being of interest to our understanding of multisensory processing, these changes also have important implications for the use of assistive devices such as cochlear implants. PMID:24324409
Milner, Rafał; Rusiniak, Mateusz; Wolak, Tomasz; Piatkowska-Janko, Ewa; Naumczyk, Patrycja; Bogorodzki, Piotr; Senderski, Andrzej; Ganc, Małgorzata; Skarzyński, Henryk
Processing of auditory information in the central nervous system is based on a series of rapidly occurring neural processes that cannot be separately monitored using fMRI registration alone. Simultaneous recording of auditory evoked potentials, characterized by good temporal resolution, and functional magnetic resonance imaging, with its excellent spatial resolution, allows higher auditory functions to be studied with precision in both time and space. The aim of this study was to implement the simultaneous AEP-fMRI recording method for the investigation of information processing at different levels of the central auditory system. Five healthy volunteers, aged 22-35 years, participated in the experiment. The study was performed using a high-field (3T) MR scanner from Siemens and a 64-channel electrophysiological system (Neuroscan) from Compumedics. Auditory evoked potentials generated by acoustic stimuli (standard and deviant tones) were registered using a modified odd-ball procedure. Functional magnetic resonance recordings were performed using a sparse acquisition paradigm. The electrophysiological recordings were analyzed by determining the voltage distributions of the AEPs on the scalp and modeling their intracerebral bioelectrical generators (dipoles). FMRI activations were determined on the basis of deviant-to-standard and standard-to-deviant functional contrasts. Results obtained from the electrophysiological studies were integrated with the functional outcomes. The morphology, amplitude, latency and voltage distribution of auditory evoked potentials (P1, N1, P2) to standard stimuli presented during simultaneous AEP-fMRI registrations were very similar to the responses obtained outside the scanner room. Significant fMRI activations to standard stimuli were found mainly in the auditory cortex. Activations in these regions corresponded with N1-wave dipoles modeled on the basis of auditory potentials generated by standard tones. Auditory evoked potentials to deviant stimuli were recorded only outside the MRI scanner.
Walsh, Timothy; Demkowicz, Leszek; Charles, Richard
In this paper the response of the external auditory system to acoustical waves of varying frequencies and angles of incidence is computed using a boundary element method. The resonance patterns of both the ear canal and the concha are computed and compared with experimental data. Specialized numerical algorithms are developed that allow for the efficient computation of the eardrum pressures. In contrast to previous results in the literature that consider only the ``blocked meatus'' configuration, in this work the simulations are conducted on a boundary element mesh that includes both the external head/ear geometry, as well as the ear canal and eardrum. The simulation technology developed in this work is intended to demonstrate the utility of numerical analysis in studying physical phenomena related to the external auditory system. Later work could extend this towards simulating in situ hearing aids, and possibly using the simulations as a tool for optimizing hearing aid technologies for particular individuals.
Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
Stebbings, Kevin A.; Lesicko, Alexandria M.H.; Llano, Daniel A.
We live in a world imbued with a rich mixture of complex sounds. Successful acoustic communication requires the ability to extract meaning from those sounds, even when degraded. One strategy used by the auditory system is to harness high-level contextual cues to modulate the perception of incoming sounds. An ideal substrate for this process is the massive set of top-down projections emanating from virtually every level of the auditory system. In this review, we provide a molecular and circuit-level description of one of the largest of these pathways: the auditory corticocollicular pathway. While its functional role remains to be fully elucidated, activation of this projection system can rapidly and profoundly change the tuning of neurons in the inferior colliculus. Several specific issues are reviewed. First, we describe the complex heterogeneous anatomical organization of the corticocollicular pathway, with particular emphasis on the topography of the pathway. We also review the laminar origin of the corticocollicular projection and discuss known physiological and morphological differences between subsets of corticocollicular cells. Finally, we discuss recent findings about the molecular micro-organization of the inferior colliculus and how it interfaces with corticocollicular termination patterns. Given the assortment of molecular tools now available to the investigator, it is hoped that this review will help guide future research on the role of this pathway in normal hearing. PMID:24911237
Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track the status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral FMAER responses.
Schneider, David M; Mooney, Richard
In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning.
Lomber, Stephen G; Malhotra, Shveta
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Da Costa, Nuno M. A.; Girardin, Cyrille C.; Naaman, Shmuel; Omer, David B.; Ruesch, Elisha; Grinvald, Amiram; Douglas, Rodney J.
Pyramidal cells in layers 2 and 3 of the neocortex of many species collectively form a clustered system of lateral axonal projections (the superficial patch system—Lund JS, Angelucci A, Bressloff PC. 2003. Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cereb Cortex. 13:15–24. or daisy architecture—Douglas RJ, Martin KAC. 2004. Neuronal circuits of the neocortex. Annu Rev Neurosci. 27:419–451.), but the function performed by this general feature of the cortical architecture remains obscure. By comparing the spatial configuration of labeled patches with the configuration of responses to drifting grating stimuli, we found the spatial organizations both of the patch system and of the cortical response to be highly conserved between cat and monkey primary visual cortex. More importantly, the configuration of the superficial patch system is directly reflected in the arrangement of function across monkey primary visual cortex. Our results indicate a close relationship between the structure of the superficial patch system and cortical responses encoding a single value across the surface of visual cortex (self-consistent states). This relationship is consistent with the spontaneous emergence of orientation response–like activity patterns during ongoing cortical activity (Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A. 2003. Spontaneously emerging cortical representations of visual attributes. Nature. 425:954–956.). We conclude that the superficial patch system is the physical encoding of self-consistent cortical states, and that a set of concurrently labeled patches participate in a network of mutually consistent representations of cortical input. PMID:21383233
Pires, Mayra Monteiro; Mota, Mailce Borges; Pinheiro, Maria Madalena Canina
This study aims to investigate working, declarative, and procedural memory in children with (central) auditory processing disorder who showed poor phonological awareness. Thirty 9- and 10-year-old children participated in the study and were distributed into two groups: a control group consisting of 15 children with typical development, and an experimental group consisting of 15 children with (central) auditory processing disorder who were classified according to three behavioral tests and who showed poor phonological awareness in the CONFIAS test battery. The memory systems were assessed through tests adapted and administered in the E-Prime 2.0 software. Working memory was assessed by the Working Memory Test Battery for Children (WMTB-C), whereas declarative memory was assessed by a picture-naming test and procedural memory was assessed by means of a morphosyntactic processing test. The results showed that, when compared to the control group, children with poor phonological awareness scored lower in the working, declarative, and procedural memory tasks. The results of this study suggest that in children with (central) auditory processing disorder, phonological awareness is associated with the analyzed memory systems.
Redies, H; Brandner, S; Creutzfeldt, O D
We investigated the projection from the medial geniculate body (MG) to the tonotopic fields (the anterior field A, the dorsocaudal field DC, the small field S) and to the nontonotopic ventrocaudal belt in the auditory cortex of the guinea pig. The auditory fields were first delimited in electrophysiological experiments with microelectrode mapping techniques. Then, small quantities of horseradish peroxidase (HRP) and/or fluorescent retrograde tracers were injected into the sites of interest, and the thalamus was checked for labeled cells. The anterior field A receives its main thalamic input from the ventral nucleus of the MG (MGv). The projection is topographically organized. Roughly, the caudal part of the MGv innervates the rostral part of field A and vice versa. After injection of tracer into low or medium best-frequency sites in A, we also found a topographic gradient along the isofrequency contours: the dorsal (ventral) part of a cortical isofrequency strip receives afferents from the rostral (caudal) portions of the corresponding thalamic isofrequency band. However, it is not so obvious whether such a gradient exists also in the high-frequency part of the projection. A second, weaker projection to field A originates in a magnocellular nucleus that is situated caudomedially in the MG and was therefore named the caudomedial nucleus. The dorsocaudal field DC receives input from the same nuclei as the anterior field, but the location of the labeled cells in the MGv is different. This was demonstrated by injection of different tracers into sites with like best frequencies in fields A and DC, respectively. After injection of HRP into the 1-2-kHz isofrequency strip in field A and injection of Nuclear Yellow (NY) into the 1-2-kHz site in field DC, the labeled cells in the MGv form one continuous array that runs from caudal to rostral over the whole extent of the MGv. The anterior part of this array consists of NY-labeled cells; i.e., it projects to field DC. The
Jenson, David; Harkrider, Ashley W.; Thornton, David; Bowers, Andrew L.; Saltuklaroglu, Tim
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required “active” discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral “auditory” alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique. PMID
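The ERSP measure used in the abstract above has a compact definition: spectral power over time, averaged across trials, expressed in dB relative to a pre-stimulus baseline, so that positive values indicate event-related synchronization (ERS). The following is a minimal illustrative sketch with simulated single-trial data, not the study's ICA-based pipeline; the sampling rate, window length, and toy alpha signal are all assumptions.

```python
import numpy as np

def ersp_db(trials, fs, f, win_s, n_baseline_wins):
    """Event-related spectral perturbation at frequency f (Hz):
    per-window spectral power averaged over trials, in dB relative
    to the mean power of the pre-stimulus baseline windows."""
    n_trials, n_samples = trials.shape
    n = int(win_s * fs)                      # samples per window
    k = int(round(f * win_s))                # FFT bin nearest f
    n_wins = n_samples // n
    segs = trials[:, :n_wins * n].reshape(n_trials, n_wins, n)
    power = (np.abs(np.fft.rfft(segs, axis=2)[:, :, k]) ** 2).mean(axis=0)
    base = power[:n_baseline_wins].mean()    # pre-stimulus baseline power
    return 10 * np.log10(power / base)       # one dB value per window
```

With 2-s epochs, a 0.5-s window, and a 10-Hz alpha signal whose amplitude triples after 1 s, the post-change windows show a positive ERSP near +9.5 dB (tripled amplitude is a ninefold power increase), i.e. alpha ERS of the kind the abstract reports.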
Lahav, Amir; Skoe, Erika
The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring initial optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which we argue is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize that the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to increased risks for a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds, further limiting quality exposure to linguistic stimuli. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.
Clarkson, Cheryl; Herrero-Turrión, M. Javier; Merchán, Miguel A.
The cortico-collicular pathway is a bilateral excitatory projection from the cortex to the inferior colliculus (IC). It is asymmetric and predominantly ipsilateral. Using microarrays and RT-qPCR, we analyzed changes in gene expression in the IC after unilateral lesions of the auditory cortex, comparing the ICs ipsi- and contralateral to the lesioned side. At 15 days after surgery there were mainly changes in gene expression in the IC ipsilateral to the lesion. Regulation primarily involved inflammatory cascade genes, suggesting a direct effect of degeneration rather than a neuronal plastic reorganization. Ninety days after the cortical lesion, the ipsilateral IC showed a significant up-regulation of genes involved in apoptosis and axonal regeneration combined with a down-regulation of genes involved in neurotransmission, synaptic growth, and gap junction assembly. In contrast, the contralateral IC at 90 days post-lesion showed an up-regulation in genes primarily related to neurotransmission, cell proliferation, and synaptic growth. There was also a down-regulation in autophagy and neuroprotection genes. These findings suggest that the reorganization in the IC after descending pathway deafferentation is a long-term process involving extensive changes in gene expression regulation. Regulated genes are involved in many different neuronal functions, and the number and gene rearrangement profile seem to depend on the density of loss of the auditory cortical inputs. PMID:23233834
Neuheiser, Anke; Lenarz, Minoo; Reuter, Guenter; Calixto, Roger; Nolte, Ingo; Lenarz, Thomas; Lim, Hubert H
The auditory midbrain implant (AMI), which consists of a single shank array designed for stimulation within the central nucleus of the inferior colliculus (ICC), has been developed for deaf patients who cannot benefit from a cochlear implant. Currently, performance levels in clinical trials for the AMI are far from those achieved by the cochlear implant and vary dramatically across patients, in part due to stimulation location effects. As an initial step towards improving the AMI, we investigated how stimulation of different regions along the isofrequency domain of the ICC, as well as varying pulse phase durations and levels, affected auditory cortical activity in anesthetized guinea pigs. This study was motivated by the need to determine in which region to implant the single shank array within a three-dimensional ICC structure and what stimulus parameters to use in patients. Our findings indicate that complex and unfavorable cortical activation properties are elicited by stimulation of caudal-dorsal ICC regions with the AMI array. Our results also confirm the existence of different functional regions along the isofrequency domain of the ICC (i.e., a caudal-dorsal and a rostral-ventral region), which have traditionally been unclassified. Based on our study as well as previous animal and human AMI findings, we may need to deliver more complex stimuli than currently used in AMI patients to effectively activate the caudal ICC, or ensure that the single shank AMI is implanted only into a rostral-ventral ICC region in future patients.
Ackermann, H; Hertrich, I; Mathiak, K; Lutzenberger, W
In humans, auditory input from each ear is represented more strongly in the contralateral cerebral hemisphere. To specify the temporal aspects of this contralaterality effect within the domain of speech stimuli, the present study recorded a series of evoked magnetic fields (M50, M100, mismatch field) following monaural presentation of stop consonant-vowel syllables using whole-head magnetoencephalography (MEG). The M50 components exhibited a skewed, cross-symmetrical distribution: an initial maximum peak succeeded by a knot over the contralateral temporal lobe and a reversed pattern over the ipsilateral temporal lobe. Most probably, this pattern of evoked fields reflects two distinct stages of central-auditory processing: (a) initial excitation of the larger contralateral and the smaller ipsilateral projection areas of the stimulated ear; (b) subsequent transcallosal activation of the residual neurons, i.e. the targets of the non-stimulated ear, on either side. Previous studies using non-speech stimuli found the contralaterality of central-auditory processing to extend to the M100 field. In contrast, in the present study a larger amplitude of the ipsilateral M100, as compared with the respective opposite deflection, emerged after stimulation of either ear. Finally, the computed magnetic analogues of mismatch negativity failed to show any significant laterality effects. These data provide first evidence for a distinct pattern of hemispheric differences at the level of the M50/M100 complex following monaural presentation of speech stimuli.
Saul, Sara M.; Brzezinski, Joseph A.; Altschuler, Richard A.; Shore, Susan E.; Rudolph, Dellaney D.; Kabara, Lisa L.; Halsey, Karin E.; Hufnagel, Robert B.; Zhou, Jianxun; Dolan, David F.; Glaser, Tom
The basic helix-loop-helix (bHLH) transcription factor Math5 (Atoh7) is required for retinal ganglion cell (RGC) and optic nerve development. Using Math5-lacZ knockout mice, we have identified an additional expression domain for Math5 outside the eye, in functionally connected structures of the central auditory system. In the adult hindbrain, the cytoplasmic Math5-lacZ reporter is expressed within the ventral cochlear nucleus (VCN), in a subpopulation of neurons that project to medial nucleus of the trapezoid body (MNTB), lateral superior olive (LSO), and lateral lemniscus (LL). These cells were identified as globular and small spherical bushy cells based on their morphology, abundance, distribution within the cochlear nucleus (CN), co-expression of Kv1.1, Kv3.1b and Kcnq4 potassium channels, and projection patterns within the auditory brainstem. Math5-lacZ is also expressed by cochlear root neurons in the auditory nerve. During embryonic development, Math5-lacZ was detected in precursor cells emerging from the caudal rhombic lip from embryonic day (E)12 onwards, consistent with the time course of CN neurogenesis. These cells co-express MafB, Math1 and Math5 and are post-mitotic. Math5 expression in the CN was verified by mRNA in situ hybridization, and the identity of positive neurons was confirmed morphologically using a Math5-Cre BAC transgene with an alkaline phosphatase reporter. The hindbrains of Math5 mutants appear grossly normal, with the exception of the CN. Although overall CN dimensions are unchanged, the lacZ positive cells are significantly smaller in Math5 −/− mice compared to Math5 +/− mice, suggesting these neurons may function abnormally. The Auditory Brainstem Response (ABR) of Math5 mutants was evaluated in a BALB/cJ congenic background. ABR thresholds of Math5 −/− mice were similar to those of wild-type and heterozygous mice, but the interpeak latencies for Peaks II-IV were significantly altered. These temporal changes are consistent
Mayhew, Stephen D; Ostwald, Dirk; Porcaro, Camillo; Bagshaw, Andrew P
The human brain is continually, dynamically active, and spontaneous fluctuations in this activity play a functional role in affecting both behavioural and neuronal responses. However, the mechanisms through which this occurs remain poorly understood. Simultaneous EEG-fMRI is a promising technique to study how spontaneous activity modulates the brain's response to stimulation, as temporal indices of ongoing cortical excitability can be integrated with spatially localised evoked responses. Here we demonstrate an interaction between the ongoing power of the electrophysiological alpha oscillation and the magnitude of both positive (PBR) and negative (NBR) fMRI responses to two contrasts of visual checkerboard reversal. Furthermore, the amplitude of pre-stimulus EEG alpha-power significantly modulated the amplitude and shape of subsequent PBR and NBR to the visual stimulus. A nonlinear reduction of visual PBR and an enhancement of auditory NBR and default-mode network NBR were observed in trials preceded by high alpha-power. These modulated areas formed a functionally connected network during a separate resting-state recording. Our findings suggest that the "baseline" state of the brain exhibits considerable trial-to-trial variability which arises from fluctuations in the balance of cortical inhibition/excitation that are represented by respective increases/decreases in the power of the EEG alpha oscillation. The consequence of this spontaneous electrophysiological variability is modulated amplitudes of both PBR and NBR to stimulation. Fluctuations in alpha-power may subserve a functional relationship in the visual-auditory network, acting as a mediator for both short- and long-range cortical inhibition, the strength of which is represented in part by NBR.
Nash-Kille, Amy; Sharma, Anu
Objective: Although brainstem dys-synchrony is a hallmark of children with auditory neuropathy spectrum disorder (ANSD), little is known about how the lack of neural synchrony manifests at more central levels. We used time-frequency single-trial EEG analyses (i.e., inter-trial coherence; ITC) to examine cortical phase synchrony in children with normal hearing (NH), sensorineural hearing loss (SNHL) and ANSD. Methods: Single trial time-frequency analyses were performed on cortical auditory evoked responses from 41 NH children, 91 children with ANSD and 50 children with SNHL. The latter two groups included children who received intervention via hearing aids and cochlear implants. ITC measures were compared between groups as a function of hearing loss, intervention type, and cortical maturational status. Results: In children with SNHL, ITC decreased as severity of hearing loss increased. Children with ANSD revealed lower levels of ITC relative to children with NH or SNHL, regardless of intervention. Children with ANSD who received cochlear implants showed significant improvements in ITC with increasing experience with their implants. Conclusions: Cortical phase coherence is significantly reduced as a result of both severe-to-profound SNHL and ANSD. Significance: ITC provides a window into the brain oscillations underlying the averaged cortical auditory evoked response. Our results provide a first description of deficits in cortical phase synchrony in children with SNHL and ANSD. PMID:24360131
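Inter-trial coherence, the single-trial measure used in the study above, is the length of the mean unit phase vector across trials at a given frequency: 1 indicates perfect phase locking to the stimulus, values near 0 indicate random phase from trial to trial. A minimal sketch with simulated epochs follows; the sampling rate, epoch length, and 40-Hz toy response are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def itc(trials, fs, f):
    """Inter-trial coherence at frequency f (Hz): the magnitude of the
    mean unit phase vector across trials. Each trial contributes only
    its phase (via the FFT bin nearest f), not its amplitude."""
    n = trials.shape[1]
    k = int(round(f * n / fs))                # FFT bin nearest f
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return float(np.abs(np.exp(1j * phases).mean()))
```

Phase-locked epochs yield ITC near 1, while epochs whose phase is drawn at random each trial yield ITC near 1/sqrt(n_trials), which is why reduced phase locking (as in ANSD) pulls the measure toward zero.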
Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang
Previous studies have shown brain reorganization after early auditory deprivation. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within auditory and visual systems, and that connectivity between language and visual systems is also reduced. Notably, significantly increased connectivity was found between auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents.
Sun, W.; Lu, J.; Stolzberg, D.; Gray, L.; Deng, A.; Lobarinas, E.; Salvi, R. J.
High doses of salicylate, the anti-inflammatory component of aspirin, induce transient tinnitus and hearing loss. Systemic injection of 250 mg/kg of salicylate, a dose that reliably induces tinnitus in rats, significantly reduced the sound evoked output of the rat cochlea. Paradoxically, salicylate significantly increased the amplitude of the sound-evoked field potential from the auditory cortex (AC) of conscious rats, but not the inferior colliculus (IC). When rats were anesthetized with isoflurane, which increases GABA-mediated inhibition, the salicylate-induced AC amplitude enhancement was abolished, whereas ketamine, which blocks N-methyl-d-aspartate receptors, further increased the salicylate-induced AC amplitude enhancement. Direct application of salicylate to the cochlea, however, reduced the response amplitude of the cochlea, IC and AC, suggesting the AC amplitude enhancement induced by systemic injection of salicylate does not originate from the cochlea. To identify a behavioral correlate of the salicylate-induced AC enhancement, the acoustic startle response was measured before and after salicylate treatment. Salicylate significantly increased the amplitude of the startle response. Collectively, these results suggest that high doses of salicylate increase the gain of the central auditory system, presumably by down-regulating GABA-mediated inhibition, leading to an exaggerated acoustic startle response. The enhanced startle response may be the behavioral correlate of hyperacusis that often accompanies tinnitus and hearing loss. Published by Elsevier Ltd on behalf of IBRO. PMID:19154777
Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can be detected only by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked. PMID:25401133
Leo van Hemmen, J.; Longtin, André; Vollmayr, Andreas N.
Quite often a response to some input with a specific frequency ν₀ can be described through a sequence of discrete events. Here, we study the synchrony vector, whose length stands for the vector strength, and in doing so focus on neuronal response in terms of spike times. The latter are supposed to be given by experiment. Instead of singling out the stimulus frequency ν₀, we study the synchrony vector as a function of the real frequency variable ν. Its length turns out to be a resonating vector strength in that it shows clear maxima in the neighborhood of ν₀ and multiples thereof, hence allowing an easy way of determining response frequencies. We study this "resonating" vector strength for two concrete but rather different cases, viz., a specific midbrain neuron in the auditory system of cat and a primary detector neuron belonging to the electric sense of the wave-type electric fish Apteronotus leptorhynchus. We show that the resonating vector strength always exhibits a clear resonance correlated with the phase locking that it quantifies. We analyze the influence of noise and demonstrate how well the resonance associated with maximal vector strength indicates the dominant stimulus frequency. Furthermore, we show how one can obtain a specific phase associated with, for instance, a delay in auditory analysis.
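The resonating vector strength described above has a compact definition: for spike times t_k and a frequency variable ν, the synchrony vector is (1/N)·Σ exp(2πi ν t_k); its modulus (the vector strength) peaks when ν matches the locking frequency ν₀, and its angle encodes the locked phase (hence a delay, via angle / (2πν)). A minimal numerical sketch follows; the spike train, jitter, and scan range are simulated illustrations, not data from the paper.

```python
import numpy as np

def synchrony_vector(spike_times, nu):
    """Complex synchrony vector at frequency nu (Hz): its modulus is
    the vector strength, its angle the locked phase."""
    return np.exp(2j * np.pi * nu * np.asarray(spike_times)).mean()

def resonating_vector_strength(spike_times, nus):
    """Vector strength as a function of the real frequency variable nu;
    it resonates (peaks) near the locking frequency, so the location of
    the maximum reads off the dominant response frequency."""
    return np.array([abs(synchrony_vector(spike_times, nu)) for nu in nus])
```

Scanning ν over a wider range would also reveal secondary resonances near integer multiples of the locking frequency, as the abstract describes; timing jitter lowers and broadens each resonance.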
Navarro Cebrian, Ana; Janata, Petr
The influence of different memory systems and associated attentional processes on the acuity of auditory images, formed for the purpose of making intonation judgments, was examined across three experiments using three different task types (cued-attention, imagery, and two-tone discrimination). In experiment 1 the influence of implicit long-term memory for musical scale structure was manipulated by varying the scale degree (leading tone versus tonic) of the probe note about which a judgment had to be made. In experiments 2 and 3 the ability of short-term absolute pitch knowledge to develop was manipulated by presenting blocks of trials in the same key or in seven different keys. The acuity of auditory images depended on all of these manipulations. Within individual listeners, thresholds in the two-tone discrimination and cued-attention conditions were closely related. In many listeners, cued-attention thresholds were similar to thresholds in the imagery condition, and depended on the amount of training individual listeners had in playing a musical instrument. The results indicate that mental images formed at a sensory/cognitive interface for the purpose of making perceptual decisions are highly malleable.
Umat, Cila; Mukari, Siti Z; Ezan, Nurul F; Din, Normah C
To examine the changes in short-term auditory memory following the use of a frequency-modulated (FM) system in children with suspected auditory processing disorders (APDs), and also to compare the advantages of bilateral over unilateral FM fitting. This longitudinal study involved 53 children from Sekolah Kebangsaan Jalan Kuantan 2, Kuala Lumpur, Malaysia who fulfilled the inclusion criteria. The study was conducted from September 2007 to October 2008 in the Department of Audiology and Speech Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia. The children's age was between 7-10 years old, and they were assigned into 3 groups: 15 in the control group (not fitted with FM); 19 in the unilateral; and 19 in the bilateral FM-fitting group. Subjects wore the FM system during school time for 12 weeks. Their working memory (WM), best learning (BL), and retention of information (ROI) were measured using the Rey Auditory Verbal Learning Test at pre-fitting, post (after 12 weeks of FM usage), and at long term (one year after the usage of the FM system ended). There were significant differences in the mean WM (p=0.001), BL (p=0.019), and ROI (p=0.005) scores at the different measurement times, in which the mean scores at long-term were consistently higher than at pre-fitting, despite similar performances at the baseline (p>0.05). There was no significant difference in performance between the unilateral- and bilateral-fitting groups. The use of FM might give a long-term effect on improving selected short-term auditory memories of some children with suspected APDs. One may not need to use 2 FM receivers to receive advantages on auditory memory performance.
Malhotra, Shveta; Lomber, Stephen G
Although the contributions of primary auditory cortex (AI) to sound localization have been extensively studied in a large number of mammals, little is known of the contributions of nonprimary auditory cortex to sound localization. Therefore the purpose of this study was to examine the contributions of both primary and all the recognized regions of acoustically responsive nonprimary auditory cortex to sound localization during both bilateral and unilateral reversible deactivation. The cats learned to make an orienting response (head movement and approach) to a 100-ms broad-band noise stimulus emitted from a central speaker or one of 12 peripheral sites (located in front of the animal, from left 90 degrees to right 90 degrees , at 15 degrees intervals) along the horizontal plane after attending to a central visual stimulus. Twenty-one cats had one or two bilateral pairs of cryoloops chronically implanted over one of ten regions of auditory cortex. We examined AI [which included the dorsal zone (DZ)], the three other tonotopic fields [anterior auditory field (AAF), posterior auditory field (PAF), ventral posterior auditory field (VPAF)], as well as six nontonotopic regions that included second auditory cortex (AII), the anterior ectosylvian sulcus (AES), the insular (IN) region, the temporal (T) region [which included the ventral auditory field (VAF)], the dorsal posterior ectosylvian (dPE) gyrus [which included the intermediate posterior ectosylvian (iPE) gyrus], and the ventral posterior ectosylvian (vPE) gyrus. In accord with earlier studies, unilateral deactivation of AI/DZ caused sound localization deficits in the contralateral field. Bilateral deactivation of AI/DZ resulted in bilateral sound localization deficits throughout the 180 degrees field examined. Of the three other tonotopically organized fields, only deactivation of PAF resulted in sound localization deficits. These deficits were virtually identical to the unilateral and bilateral deactivation results
Bruder, Jennifer; Leppänen, Paavo H T; Bartling, Jürgen; Csépe, Valéria; Démonet, Jean-Francois; Schulte-Körne, Gerd
The present study examined cortical auditory event-related potentials (AERPs) for the P1-N250 and MMN components in children 9 years of age. The first goal was to investigate whether AERPs respond differentially to vowels and complex tones, and the second goal was to explore how prototypical language formant structures might be reflected in these early auditory processing stages. Stimuli were two synthetic within-category vowels (/y/), one of which was preferred by adult German listeners ("prototypical-vowel"), and analogous complex tones. P1 strongly distinguished vowels from tones, revealing larger amplitudes for the more difficult to discriminate but phonetically richer vowel stimuli. Prototypical language phoneme status did not reliably affect AERPs; however, P1 amplitudes elicited by the prototypical-vowel correlated robustly with the ability to correctly identify two prototypical-vowels presented in succession as "same" (r=-0.70) and word reading fluency (r=-0.63). These negative correlations suggest that smaller P1 amplitudes elicited by the prototypical-vowel predict enhanced accuracy when judging prototypical-vowel "sameness" and increased word reading speed. N250 and MMN did not differentiate between vowels and tones and showed no correlations to behavioural measures. Copyright © 2010 Elsevier B.V. All rights reserved.
Boatright-Horowitz, Seth Stuart
Larval ranid amphibians undergo metamorphic development, during which they transform from strictly aquatic larvae to partly terrestrial adults. A series of anatomical and electrophysiological experiments was conducted to examine the development of the central auditory system and acoustic conduction pathways across metamorphosis. Gross anatomical dissection and coronal sections of tadpoles indicated that there were no peripheral structures overlying the oval window (OW) in pre- and early prometamorphic tadpoles, while the OWs of late prometamorphic animals were blocked by elements of the forming opercularis system. The OWs of metamorphic climax tadpoles were connected via the opercularis muscle to the shoulder girdle, forming an extratympanic transduction pathway. Components of the tympanic pathway were not mature until after completion of metamorphosis. The bronchial columella, described by Witschi (1949), was observed in animals up to mid-metamorphic climax. Iontophoresis of horseradish peroxidase (HRP) into the torus semicircularis (TS) demonstrated changes in connectivity with other brainstem auditory nuclei across metamorphosis. Pre- and early prometamorphic tadpoles displayed stable, limited transport to the acoustic nucleus (AcN), and robust labeling of the anterior lateral line (LLa) and superior olivary (SON) nuclei. Late prometamorphic tadpoles displayed highly reduced SON labeling and variable labeling of the LLa and AcN. Tadpoles in metamorphic climax showed a stage-dependent increase in labeling of the SON and AcN, and loss of labeling in the LLa. Late metamorphic climax tadpoles and recently postmetamorphic froglets demonstrated adult-like connectivity. Multiunit recordings in the TS showed that pre- and early prometamorphic tadpoles demonstrated significant phase locking to periodic stimuli at modulation rates as high as 250 Hz, and relatively sharply tuned audiograms with best frequencies (BF) in the range of 2000-2500 Hz. Late prometamorphic tadpoles
Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research on the effects of chronic stress on the auditory system, detailing how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory systems, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and stress-related disorders.
Anderson, Lucy A.
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the
Van der Stoep, N; Nijboer, T C W; Van der Stigchel, S
The decision about which location should be the goal of the next eye movement is known to be determined by the interaction between auditory and visual input. This interaction can be explained by the vector theory that states that each element (either visual or auditory) in a scene evokes a vector in the oculomotor system. These vectors determine the direction in which the eye movement is initiated. Because auditory input is lateralized and localizable in most studies, it is currently unclear how non-lateralized auditory input interacts with the vectors evoked by visual input. In the current study, we investigated the influence of a non-lateralized auditory non-target on saccade accuracy (saccade angle deviation from the target) and latency in a single-target condition in Experiment 1 and a double-target condition in Experiment 2. The visual targets in Experiment 2 were positioned in such a way that saccades on average landed in between the two targets (i.e., a global effect). There was no effect of the auditory input on saccade accuracy in the single-target condition, but auditory input did influence saccade accuracy in the double-target condition. In both experiments, saccade latency increased when auditory input accompanied the visual target(s). Together, these findings show that non-lateralized auditory input enhances all vectors evoked by visual input. The results will be discussed in terms of their possible neural substrates.
Guo, Qianqian; Li, Yuling; Fu, Xinxing; Liu, Hui; Chen, Jing; Meng, Chao; Long, Mo; Chen, Xueqing
The purpose of the current study was to evaluate the relationship between the presence or absence of cortical auditory evoked potentials (CAEPs) to speech stimuli and speech perception performance in Chinese pediatric recipients of the Nurotron(®) cochlear implant (CI). We also wanted to determine how CAEPs might be used as an indicator for predicting early speech perception and could provide objective evidence for clinical applications of CAEPs. Twenty-three pediatric unilateral CI recipients (15 males, 8 females) participated in this study; their ages at implantation ranged from 13 to 68 months, with a mean of 36 months. CAEPs and the Mandarin Early Speech Perception (MESP) test were used to evaluate the audibility and speech perception of these CI users. The tests were administered at the first, second, third, and fourth year after CI surgery. All subjects demonstrated improvements in detection of speech sounds with the CI. The percentage of participants who could detect all three stimuli increased from 26% (6/23) at the first year to 100% (23/23) at the fourth year post-implantation. The percentage of participants who passed Category 6 of the MESP increased from 9% (2/23) at the first year to 91% (21/23) at the fourth year post-implantation. Significant correlations (p<0.05) were found between CAEP scores and MESP at the first, second, and third years after CI surgery. The multiple regression equation for predicting MESP categories from CAEP scores and hearing age was MESP = 1.088 + (0.504 × CAEP score) + (0.964 × hearing age) (F=72.919, p<0.001, R²=0.621). The results of this study suggest that aided cortical assessment is a useful tool for evaluating the outcomes of cochlear implantation. Cortical outcomes had a significant positive relationship with the MESP, predicting the early speech perception of CI recipients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
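The reported regression can be illustrated with a short sketch. The coefficients come directly from the abstract; the function name and the example inputs are my own choices for illustration, not from the study.

```python
# Illustration of the regression reported in the abstract:
#   MESP = 1.088 + 0.504 * CAEP_score + 0.964 * hearing_age
# Coefficients are from the abstract; the function name and example
# inputs are hypothetical.
def predict_mesp(caep_score, hearing_age_years):
    """Predicted MESP category from a CAEP score and hearing age (years)."""
    return 1.088 + 0.504 * caep_score + 0.964 * hearing_age_years

# e.g. a CAEP score of 3 and a hearing age of 2 years:
print(predict_mesp(3, 2))  # -> 4.528
```

Such a linear prediction is only meaningful within the range of the study's data (hearing ages of roughly 1-4 years post-implantation).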
Nakashima, K; Wang, Y; Shimoda, M; Shimoyama, R; Yokoyama, Y; Takahashi, K
The effects of sound on the responses in the abductor pollicis brevis muscle after magnetic cortical stimulation, and on the H-reflexes in the wrist and finger flexor muscles, were examined. Magnetic cortical stimulation and the electrical stimulation eliciting H-reflexes were conditioned by sound stimulation. This sound stimulation did not produce an electromyographic response by itself. In the control subjects, sound stimulation produced an increase in the motor responses after cortical stimulation at intervals of 100, 150, 200 and 250 ms. The increase was greater in the patients with Parkinson's disease (PD). In the control subjects, sound stimulation produced an increase in the H-reflexes at intervals of 50, 100, 150 and 200 ms. This H-reflex increase was smaller in the PD patients than in the normal subjects. The reticular system might play a role in the abnormal motor control system of PD patients.
Bacro, Thierry R H; Gebregziabher, Mulugeta; Ariail, Jennie
The literature reports that using Learning Recording Systems (LRS) is usually well received by students, but that the pedagogical value of LRS in academic settings remains somewhat unclear. The primary aim of the current study is to document students' perceptions, actual patterns of usage, and the impact of LRS use on students' grades in a dental gross and neuroanatomy course. Other aims are to determine whether students' learning preference correlated with final grades, and whether other factors such as gender, age, overall academic score on the Dental Aptitude Test (DAT), lecture level of difficulty, type of lecture, category of lecture, or teaching faculty could explain the impact, if any, of LRS use on the course final grade. No significant correlation was detected between the final grades and the variables studied, except for a significant but modest correlation between final grades and the number of times the students accessed the lecture recordings (r=0.33, P=0.01). Also, after adjusting for gender, age, learning style, and academic DAT, a significant interaction between auditory learning style and average usage time was found for final grade (P=0.03). Students who classified themselves as auditory and who used the LRS on average for fewer than 10 minutes per access scored an average final grade 16.43% higher than nonauditory students using the LRS for the same amount of time per access. Based on these findings, implications for teaching are discussed and recommendations for the use of LRS are proposed. Copyright © 2013 American Association of Anatomists.
Escera, Carles; Malmierca, Manuel S
In this account, we attempt to integrate two parallel but thus far separate lines of research on auditory novelty detection: (1) human studies of EEG recordings of the mismatch negativity (MMN), and (2) animal studies of single-neuron recordings of stimulus-specific adaptation (SSA). The studies demonstrating the existence of novelty neurons showing SSA at different levels of the auditory pathway's hierarchy, together with recent results showing human auditory-evoked potential correlates of deviance detection at very short latencies, that is, at 20-40 ms from change onset, support the view that novelty detection is a key principle governing the functional organization of the auditory system. Furthermore, the generation of the MMN recorded from the human scalp seems to involve a cascade of neuronal processing occurring at successive levels of the auditory system's hierarchy.
Gupta, Disha; Hill, N. Jeremy; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Schalk, Gerwin
Objective. Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. Approach. We study these questions using electrocorticographic signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (six for offline parameter optimization, six for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main results. Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelopes. These decoders were trained separately and executed simultaneously in real time. Significance. This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple
Hegerl, U; Gallinat, J; Mrowinski, D
Action-oriented personality traits such as sensation seeking, extraversion, and impulsivity have been related to a pronounced amplitude increase of auditory evoked scalp potentials with increasing stimulus intensity. Dipole source analysis represents a crucial methodological progress in this context, because overlapping subcomponents of the scalp potentials can be separated and can be related to their generating cortical structures. In a study on 40 healthy subjects, it was found that sensation seeking is clearly related to the auditory evoked response pattern (N1/P2-component, stimulus intensities: 60, 70, 80, 90, 100 dB SPL) of the superior temporal plane including primary auditory cortex, but not to that of secondary auditory areas in the lateral temporal cortex. These results support the concept that the serotonergic brain system, which is supposed to modulate sensory processing in primary auditory cortices, is an important factor underlying individual differences in sensation seeking.
İlhan, Barkın; VanRullen, Rufin
It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ∼10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ∼10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested if auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8–13 Hz) oscillations play a special role in vision. PMID:23145143
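The reverse-correlation analysis described above can be sketched in a few lines. This is a generic toy version, not the authors' pipeline: the sampling rate, noise level, and the simulated "echo" kernel are all illustrative assumptions, used only to show how cross-correlating a random stimulus sequence with EEG epochs recovers an impulse-response estimate.

```python
import numpy as np

# Toy reverse-correlation sketch (illustrative parameters, not the study's):
# simulate an EEG epoch as a noisy convolution of a white-noise stimulus
# with a decaying ~10 Hz "echo" kernel, then recover the kernel by
# cross-correlating stimulus and EEG at each lag.
rng = np.random.default_rng(0)
fs = 160                                   # assumed sampling rate (Hz)
stim = rng.standard_normal(fs * 6)         # 6 s white-noise stimulus sequence
t = np.arange(fs) / fs                     # 1 s of kernel support
kernel = np.sin(2 * np.pi * 10 * t) * np.exp(-t / 0.3)   # decaying 10 Hz echo
eeg = np.convolve(stim, kernel)[:stim.size] + rng.standard_normal(stim.size)

# Reverse correlation: correlate stimulus with EEG at lags 0..1 s
lags = np.arange(fs)
echo = np.array([np.dot(stim[:stim.size - l], eeg[l:]) for l in lags]) / stim.size
```

With a white-noise stimulus, `echo` approximates the underlying kernel; in the actual experiments, the absence of any such structure in the auditory condition is what argued against auditory echoes.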
Lomas, Kathryn F; Greenwood, David R; Windmill, James F C; Jackson, Joseph C; Corfield, Jeremy; Parsons, Stuart
Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed fluid-filled channel, previously presumed to be hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not actually hemolymph, but a medium composed principally of lipid from a new class. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within the channel, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it is producing these lipids in situ from acetate. The auditory role of this lipid channel was investigated using Laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane is significantly increased when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result considering that in a typical auditory system both the mechanical and auditory sensitivity are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition relying strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside of the Odontocete cetaceans, demonstrating convergence for the use of lipids in hearing.
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
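The feedforward-inhibition idea above can be made concrete with a toy sketch. This is my own minimal illustration under stated assumptions (unit impulses for acoustic events, arbitrary time bins and weights), not the paper's computational model: delayed inhibition leaves a brief excitatory transient locked to each event, while coincident, balanced inhibition cancels the transient.

```python
import numpy as np

# Toy net-drive model (illustrative, not the authors' implementation):
# each acoustic event injects unit excitation, and inhibition of equal
# strength arrives either after a delay or coincidentally.
def net_drive(events, exc=1.0, inh=1.0, delay=5, n=100):
    """Excitation minus inhibition over n time bins for impulse events."""
    e = np.zeros(n)
    i = np.zeros(n)
    for t in events:
        e[t] += exc
        if t + delay < n:
            i[t + delay] += inh
    return e - i

events = [10, 40, 70]
locked = net_drive(events, delay=5)    # delayed inhibition: event-locked transients
balanced = net_drive(events, delay=0)  # coincident, balanced inhibition: cancellation
```

In the delayed case, `locked` retains a positive transient at each event time (a temporal, stimulus-locked code); in the coincident case, `balanced` is flat, leaving only weak net drive consistent with a non-synchronized rate representation.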
Balbani, Aracy Pereira Silveira; Montovani, Jair Cortez
Telecommunications systems emit radiofrequency, which is an invisible electromagnetic radiation. Mobile phones operate with microwaves (450-900 MHz in the analog service, and 1.8-2.2 GHz in the digital service) very close to the user's ear. The skin, inner ear, cochlear nerve and the temporal lobe surface absorb the radiofrequency energy. Aim: literature review on the influence of cellular phones on hearing and balance. Study design: systematic review. We reviewed papers on the influence of mobile phones on the auditory and vestibular systems from the Lilacs and Medline databases, published from 2000 to 2005, and also materials available on the Internet. Studies concerning mobile phone radiation and the risk of developing an acoustic neuroma have controversial results. Some authors did not see evidence of a higher risk of tumor development in mobile phone users, while others report that usage of analog cellular phones for ten or more years increases the risk of developing the tumor. Acute exposure to mobile phone microwaves does not influence the function of cochlear outer hair cells in vivo and in vitro, the electrical properties of the cochlear nerve, or the physiology of the vestibular system in humans. Analog hearing aids are more susceptible to the electromagnetic interference caused by digital mobile phones. In conclusion, there is no evidence of cochleo-vestibular lesions caused by cellular phones.
Costa-Faidella, Jordi; Grimm, Sabine; Slabu, Lavinia; Díaz-Santaella, Francisco; Escera, Carles
Single neurons in the primary auditory cortex of the cat show faster adaptation time constants to short- than long-term stimulus history. This ability to encode the complex past auditory stimulation in multiple time scales would enable the auditory system to generate expectations of the incoming stimuli. Here, we tested whether large neural populations exhibit this ability as well, by recording human auditory evoked potentials (AEP) to pure tones in a sequence embedding short- and long-term aspects of stimulus history. Our results yielded dynamic amplitude modulations of the P2 AEP to stimulus repetition spanning from milliseconds to tens of seconds concurrently, as well as amplitude modulations of the mismatch negativity AEP to regularity violations. A simple linear model of expectancy accounting for both short- and long-term stimulus history described our results, paralleling the behavior of neurons in the primary auditory cortex.
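A linear expectancy model of the kind mentioned above can be sketched as follows. This is my own construction under simple assumptions (a short-term term that grows with consecutive repetitions and a long-term term given by a tone's overall proportion in the sequence so far, combined with hypothetical weights), not the authors' fitted model.

```python
# Illustrative linear expectancy model (hypothetical weights and terms,
# not the paper's): expectancy for each tone combines short-term history
# (consecutive repetitions) and long-term history (overall proportion).
def expectancy(sequence, w_short=0.6, w_long=0.4):
    """Per-tone expectancy values for a sequence of tone labels."""
    out = []
    run = 0
    counts = {}
    for k, tone in enumerate(sequence, start=1):
        run = run + 1 if k > 1 and tone == sequence[k - 2] else 1
        counts[tone] = counts.get(tone, 0) + 1
        short = 1 - 1 / run        # grows with consecutive repeats
        long_ = counts[tone] / k   # overall proportion so far
        out.append(w_short * short + w_long * long_)
    return out

vals = expectancy(["A", "A", "A", "B", "A"])
```

Under such a model, expectancy rises across a run of standards (predicting P2 amplitude modulation) and drops sharply at a deviant, consistent with a regularity-violation response.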
Bidelman, Gavin M; Moreno, Sylvain; Alain, Claude
Speech perception requires the effortless mapping from smooth, seemingly continuous changes in sound features into discrete perceptual units, a conversion exemplified in the phenomenon of categorical perception. Explaining how/when the human brain performs this acoustic-phonetic transformation remains an elusive problem in current models and theories of speech perception. In previous attempts to decipher the neural basis of speech perception, it is often unclear whether the alleged brain correlates reflect an underlying percept or merely changes in neural activity that covary with parameters of the stimulus. Here, we recorded neuroelectric activity generated at both cortical and subcortical levels of the auditory pathway elicited by a speech vowel continuum whose percept varied categorically from /u/ to /a/. This integrative approach allows us to characterize how various auditory structures code, transform, and ultimately render the perception of speech material as well as dissociate brain responses reflecting changes in stimulus acoustics from those that index true internalized percepts. We find that activity from the brainstem mirrors properties of the speech waveform with remarkable fidelity, reflecting progressive changes in speech acoustics but not the discrete phonetic classes reported behaviorally. In comparison, patterns of late cortical evoked activity contain information reflecting distinct perceptual categories and predict the abstract phonetic speech boundaries heard by listeners. Our findings demonstrate a critical transformation in neural speech representations between brainstem and early auditory cortex analogous to an acoustic-phonetic mapping necessary to generate categorical speech percepts. Analytic modeling demonstrates that a simple nonlinearity accounts for the transformation between early (subcortical) brain activity and subsequent cortical/behavioral responses to speech (>150-200 ms) thereby describing a plausible mechanism by which the
Getzmann, Stephan; Lewald, Jörg
Cortical processing of horizontal and vertical sound motion in free-field space was investigated using high-density electroencephalography in combination with standardized low-resolution brain electromagnetic tomography (sLORETA). Eighteen subjects heard sound stimuli that, after an initial stationary phase in a central position, started to move centrifugally, either to the left, to the right, upward, or downward. The delayed onset of both horizontal and vertical motion elicited a specific motion-onset response (MOR), resulting in widely distributed activations, with prominent maxima in primary and nonprimary auditory cortices, insula, and parietal lobe. The comparison of MORs to horizontal and vertical motion orientations did not indicate any significant differences in latency or topography. Contrasting the sLORETA solutions for the two motion orientations revealed only marginal activation in postcentral gyrus. These data are consistent with the notion that azimuth and elevation components of dynamic auditory spatial information are processed in common, rather than separate, cortical substrates. Furthermore, the findings support the assumption that the MOR originates at a stage of auditory analysis after the different spatial cues (interaural and monaural spectral cues) have been integrated into a unified space code.
Kajikawa, Yoshinao; Camalier, Corrie R.; de la Mothe, Lisa A.; D’Angelo, William R.; Sterbing-D’Angelo, Susanne J.; Hackett, Troy A.
We examined multiunit responses to tones and to 1/3 and 2/3 octave band-pass noise (BPN) in the marmoset primary auditory cortex (A1) and the caudomedial belt (CM). In both areas, BPN was more effective than tones, evoking multiunit responses at lower intensity and across a wider frequency range. Typically, the best responses to BPN remained at the characteristic frequency. Additionally, in both areas responses to BPN tended to be of greater magnitude and shorter latency than responses to tones. These effects are consistent with the integration of more excitatory inputs driven by BPN than by tones. While it is generally thought that single units in A1 prefer narrowband sounds such as tones, we found that the best responses for multiunits in both A1 and CM were obtained with noises of narrow spectral bandwidths. PMID:21540062
Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N
Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size is not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are therefore different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.
Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A. John; van Ee, Raymond
After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this “octave effect” not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords. PMID:24003112
Chen, Kuen-Lin; Yang, Hong-Chang; Tsai, Sung-Ying; Liu, Yu-Wei; Liao, Shu-Hsien; Horng, Herng-Er; Lee, Yong-Ho; Kwon, Hyukchan
The superconducting quantum interference device (SQUID), a very sensitive magnetic sensor, has been widely used to detect ultra-small magnetic signals in many different fields, especially in biomagnetic measurement. In this study, a 128-channel SQUID first-order axial gradiometer system for whole-head magnetoencephalography (MEG) measurements was set up to characterize auditory evoked magnetic fields (AEFs). A 500 Hz monaural pure tone lasting 425 ms, at a sound pressure level of 80 dB, was randomly applied to the left ear of the subject with an inter-stimulus interval of 1.5-2.8 s to prevent fatigue of nerves. We demonstrated that the characteristic waveforms of AEFs can be accurately recorded and analyzed. Using source localization procedures, the origins of the AEFs were successfully calculated to lie in the auditory cortices, brain areas known to be responsive to sound stimuli. A phantom experiment also confirmed the good localization accuracy of the established MEG system and measurement procedures. The validated performance of the SQUID system suggests that this technique can also be employed in other brain research.
Vitti, Simone Virginia; Blasca, Wanderléia Quinhoneiro; Sigulem, Daniel; Torres Pisa, Ivan
Adults and elderly users of hearing aids suffer psychosocial reactions as a result of hearing loss. Auditory rehabilitation is typically carried out with support from a speech therapist, usually in a clinical center. For these cases, there is a lack of computer-based self-training tools for minimizing the psychosocial impact of hearing deficiency. To develop and evaluate a web-based auditory self-training system for adult and elderly users of hearing aids. Two modules were developed for the web system: an information module based on guidelines for using hearing aids; and an auditory training module presenting a sequence of training exercises for auditory abilities along the lines of the auditory skill steps within auditory processing. We built a web system using the PHP programming language and a MySQL database, from requirements surveyed through focus groups conducted by healthcare information technology experts. The web system was evaluated by speech therapists and hearing aid users. An initial sample of 150 patients at DSA/HRAC/USP was defined to apply the system, with the inclusion criteria that individuals should be over the age of 25 years, have a hearing impairment, be hearing aid users, and have a computer and internet experience. They were divided into two groups: a control group (G1) and an experimental group (G2). These patients were evaluated clinically using the HHIA for adults and the HHIE for elderly people, before and after system implementation. A third, web-based group (G3) was formed with users who were invited through social networks to give their opinions on using the system. A questionnaire evaluating hearing complaints was given to all three groups. The study hypothesis was that G2 would present greater auditory perception, higher satisfaction and fewer complaints than G1 after the auditory training. It was expected that G3 would have fewer complaints regarding use and acceptance of the system. The web system, which was named Sis
Hirsch, Sven; Reichold, Johannes; Schneider, Matthias; Székely, Gábor; Weber, Bruno
The cerebrovascular system continuously delivers oxygen and energy substrates to the brain, which is one of the organs with the highest basal energy requirement in mammals. Discontinuities in the delivery lead to fatal consequences for the brain tissue. A detailed understanding of the structure of the cerebrovascular system is important for a multitude of (patho-)physiological cerebral processes and many noninvasive functional imaging methods rely on a signal that originates from the vasculature. Furthermore, neurodegenerative diseases often involve the cerebrovascular system and could contribute to neuronal loss. In this review, we focus on the cortical vascular system. In the first part, we present the current knowledge of the vascular anatomy. This is followed by a theory of topology and its application to vascular biology. We then discuss possible interactions between cerebral blood flow and vascular topology, before summarizing the existing body of the literature on quantitative cerebrovascular topology. PMID:22472613
Stumpner, A.; von Helversen, D.
While the sensing of substrate vibrations is common among arthropods, the reception of sound pressure waves is an adaptation restricted to insects, which has arisen independently several times in different orders. Wherever studied, tympanal organs were shown to derive from chordotonal precursors, which were modified such that mechanosensitive scolopidia became attached to thin cuticular membranes backed by air-filled tracheal cavities (except in lacewings). The behavioural context in which hearing has evolved has strongly determined the design and properties of the auditory system. Hearing organs which have evolved in the context of predator avoidance are highly sensitive, preferentially in a broad range of ultrasound frequencies, which trigger rapid escape manoeuvres. Hearing in the context of communication requires not only recognition and discrimination of highly specific song patterns but also their localisation. Typically, the spectrum of the conspecific signals matches the best sensitivity of the receiver. Directionality is achieved by means of sophisticated peripheral structures and is further enhanced by neuronal processing. Side-specific gain control typically allows the insect to encode the loudest signal on each side. The filtered information is transmitted to the brain, where the final steps of pattern recognition and localisation occur. The outputs of such filter networks, modulated or gated by further processes (subsumed by the term motivation), trigger command neurones for specific behaviours. Altogether, the many improvements opportunistically evolved at any stage of acoustic information-processing ultimately allow insects to come up with astonishing acoustic performances similar to those achieved by vertebrates.
Tatagiba, M; Gharabaghi, A
Perceptual benefits and potential risks of electrical stimulation of the central auditory system are constantly changing due to ongoing developments and technical modifications. Therefore, we would like to introduce current treatment protocols and strategies that might have an impact on functional results of auditory brainstem implants (ABI) in profoundly deaf patients. Patients with bilateral tumours as a result of neurofibromatosis type 2 with complete dysfunction of the eighth cranial nerves are the most frequent candidates for auditory brainstem implants. Worldwide, about 300 patients have already received an ABI through a translabyrinthine or suboccipital approach supported by multimodality electrophysiological monitoring. Patient selection is based on disease course, clinical signs, audiological, radiological and psycho-social criteria. The ABI provides the patients with access to auditory information such as environmental sound awareness together with distinct hearing cues in speech. In addition, this device markedly improves speech reception in combination with lip-reading. Nonetheless, there is only limited open-set speech understanding. Results of hearing function are correlated with electrode design, number of activated electrodes, speech processing strategies, duration of pre-existing deafness and extent of brainstem deformation. Functional neurostimulation of the central auditory system by a brainstem implant is a safe and beneficial procedure, which may considerably improve the quality of life in patients suffering from deafness due to bilateral retrocochlear lesions. The auditory outcome may be improved by a new generation of microelectrodes capable of penetrating the surface of the brainstem to access the auditory neurons more directly.
López-Caballero, Fran; Zarnowiec, Katarzyna; Escera, Carles
Deviance detection is a key functional property of the auditory system that allows pre-attentive discrimination of incoming stimuli not conforming to a rule extracted from the ongoing constant stimulation, thereby proving that regularities in the auditory scene have been encoded in the auditory system. Using simple-feature stimulus deviations, regularity encoding and deviance detection have been reported in brain responses at multiple latencies of the human Auditory Evoked Potential (AEP), such as the Mismatch Negativity (MMN; peaking at 100-250 ms from stimulus onset) and Middle-Latency Responses (MLR; peaking at 12-50 ms). More complex levels of regularity violations, however, are only indexed by AEPs generated at higher stages of the auditory system, suggesting a hierarchical organization in the encoding of auditory regularities. The aim of the current study is to further characterize the auditory hierarchy of novelty responses, by assessing the sensitivity of MLR components to deviant probability manipulations. MMNs and MLRs were recorded in 24 healthy participants, using an oddball location paradigm with three different deviant probabilities (5%, 10% and 20%), and a reversed-standard (91.5%). We analyzed differences in the MLRs elicited to each of the deviant stimuli and the reversed-standard, as well as within deviant stimuli. Our results confirmed deviance detection at the level of both MLRs and MMN, but significant differences for deviant probabilities were found only for the MMN. These results suggest a functional dissociation between regularity encoding, already present at early stages of auditory processing, and the encoding of the probability with which this regularity is disrupted, which is only processed at higher stages of the auditory hierarchy.
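The oddball paradigm in this abstract interleaves rare deviants among repeated standards at a fixed probability. As a rough illustration of how such a stimulus sequence can be generated (a hypothetical sketch; the function name and the no-consecutive-deviants constraint are illustrative assumptions, not the authors' stimulus code):

```python
import random

def oddball_sequence(n_trials, deviant_prob, seed=0):
    """Generate an oddball sequence as a list of 'std'/'dev' labels.

    Enforces at least one standard between deviants, a constraint
    commonly used in MMN paradigms; the realized deviant rate is
    therefore slightly below `deviant_prob`.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sequence
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == 'dev':
            seq.append('std')  # no two deviants in a row
        else:
            seq.append('dev' if rng.random() < deviant_prob else 'std')
    return seq

# Example: a 10%-deviant block
block = oddball_sequence(n_trials=2000, deviant_prob=0.10)
```

Separate blocks generated with `deviant_prob` set to 0.05, 0.10 and 0.20 would mirror the three probability conditions compared in the study.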
Grassia, Filippo; Buhry, Laure; Lévi, Timothée; Tomas, Jean; Destexhe, Alain; Saïghi, Sylvain
Many software solutions are currently available for simulating neuron models. Less conventional than software-based systems, hardware-based solutions generally combine digital and analog forms of computation. In previous work, we designed several neuromimetic chips, including the Galway chip used for this paper. These silicon neurons are based on the Hodgkin–Huxley formalism and are optimized to reproduce a large variety of neuron behaviors thanks to tunable parameters. Because of process variation and device mismatch in analog chips, we use a full-custom fitting method in voltage-clamp mode to tune our neuromimetic integrated circuits. In this paper, we present experimental measurements of our system mimicking the four most prominent biological cell types in an analog neuromimetic integrated circuit dedicated to cortical neuron simulations: fast-spiking, regular-spiking, intrinsically bursting, and low-threshold spiking neurons. By comparing the measurements with experimental electrophysiological data from these cells, we show that the circuits can reproduce the main firing features of cortical cell types. This hardware and software platform will make it possible to improve the hybrid technique, also called “dynamic-clamp,” which consists of connecting artificial and biological neurons to study the function of neuronal circuits. PMID:22163213
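The silicon neurons above implement the Hodgkin–Huxley formalism in analog hardware. For orientation, here is a minimal software sketch of the classic Hodgkin–Huxley point neuron with standard textbook parameters (a generic illustration only; it is not the Galway chip's tuned model or the authors' code):

```python
import math

def hh_simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler simulation of the classic Hodgkin-Huxley point neuron.

    I_ext is a constant injected current (uA/cm^2); times are in ms.
    Returns the membrane-voltage trace (mV). dt must stay small
    (~0.01 ms) for the explicit Euler scheme to be stable.
    """
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # capacitance, max conductances
    ENa, EK, EL = 50.0, -77.0, -54.387              # reversal potentials (mV)

    # Standard rate functions for the m, h, n gating variables
    def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
    def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
    def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))   # gating variables start at rest steady state
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(round(t_max / dt))):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        trace.append(V)
    return trace

def count_spikes(trace, threshold=0.0):
    """Count upward threshold crossings (one per action potential)."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a < threshold <= b)
```

With the default 10 uA/cm^2 step current the model fires repetitively; counting upward zero-crossings of the trace gives the number of action potentials in the 50 ms window.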
Easwar, Vijayalakshmi; Purcell, David W.; Scollie, Susan D.
Background. Functioning of nonlinear hearing aids varies with characteristics of input stimuli. In the past decade, aided speech evoked cortical auditory evoked potentials (CAEPs) have been proposed for validation of hearing aid fittings. However, unlike in running speech, phonemes presented as stimuli during CAEP testing are preceded by silent intervals of over one second. Hence, the present study aimed to determine whether hearing aids process phonemes similarly in running speech and in CAEP testing contexts. Method. A sample of ten hearing aids was used. Overall phoneme level and phoneme onset level of eight phonemes in both contexts were compared at three input levels representing conversational speech levels. Results. Differences of over 3 dB between the two contexts were noted in one-fourth of the observations measuring overall phoneme levels and in one-third of the observations measuring phoneme onset level. In a majority of these differences, output levels of phonemes were higher in the running speech context. These differences varied across hearing aids. Conclusion. Lower output levels in the isolation context may have implications for calibration and estimation of audibility based on CAEPs. The variability across hearing aids observed could make it challenging to predict differences on an individual basis. PMID:23316236
Szalkowski, Caitlin E.; Booker, Anne B.; Truong, Dongnhu T.; Threlkeld, Steven W.; Rosen, Glenn D.; Fitch, Roslyn H.
The current study investigated the behavioral and neuroanatomical effects of embryonic knockdown of the candidate dyslexia susceptibility gene (CDSG) homolog Dyx1c1 through RNA interference in rats. Specifically, we examined long-term effects on visual attention abilities in males, in addition to assessing rapid and complex auditory processing abilities in male and, for the first time, female rats. Results replicated prior evidence of complex acoustic processing deficits in Dyx1c1 male rats, and revealed new evidence of comparable deficits in Dyx1c1 female rats. Moreover, we found new evidence that knocking down Dyx1c1 produced orthogonal impairments in visual attention in the male sub-group. Stereological analyses of male brains from prior RNA interference studies revealed that, despite consistent visible evidence of disruptions in neuronal migration (i.e., heterotopia), knockdown of Dyx1c1 did not significantly alter cortical volume, hippocampal volume, or midsagittal area of the corpus callosum (measured in a separate cohort of like-treated Dyx1c1 male rats). Dyx1c1 transfection did however lead to significant changes in medial geniculate nucleus (MGN) anatomy, with a significant shift to smaller MGN neurons in Dyx1c1 transfected animals. Combined results provide important information about the impact of Dyx1c1 on behavioral functions that parallel domains known to be affected in language impaired populations, as well as information about widespread changes to the brain following early disruption of this candidate dyslexia susceptibility gene. PMID:23594585
Guenther, Frank H; Hickok, Gregory
This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions for research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.
Gupta, Disha; Hill, N. Jeremy; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Schalk, Gerwin
Objective: Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor, or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. Approach: We study these questions using electrocorticographic (ECoG) signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (6 for offline parameter optimization, 6 for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main results: Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelope. These decoders were trained separately and executed simultaneously in real time. Significance: This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple
Corina, David P.; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A.; Coffey-Corina, Sharon
Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians’ best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2–8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1–N1 response. We observed reduced auditory P1 amplitudes and a lack of latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but do not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children’s early auditory and visual
Shelley, A M; Silipo, G; Javitt, D C
Event-related potentials (ERPs) were recorded from 15 schizophrenic patients and 17 normal controls in an auditory oddball paradigm in order to investigate the effects of stimulus probability and interstimulus interval (ISI) on deficits in mismatch negativity (MMN) generation in schizophrenia. MMN amplitude was reduced for schizophrenics overall, with the degree of deficit increasing as deviant probability decreased. In contrast, schizophrenic subjects were no more affected by alterations in ISI than controls. The experimental design also permitted evaluation of N1 generation as a function of ISI in schizophrenia. Schizophrenic subjects showed decreased N1 amplitude across conditions, with the degree of deficit increasing with increasing ISI. For both MMN and N1, therefore, the degree of deficit increased with increasing component amplitude in normals, implying that the deficit in ERP generation in schizophrenia may reflect a decrease in maximal current flow through underlying neuronal ensembles. The observed pattern of dysfunction is consistent both with observations of impaired precision of processing in schizophrenia, and with predictions of the PCP/NMDA model.
Sun, Yujiao J; Kim, Young-Joo; Ibrahim, Leena A; Tao, Huizhong W; Zhang, Li I
Corticofugal projections from the primary auditory cortex (A1) have been shown to play a role in modulating subcortical processing. However, functional properties of the corticofugal neurons and their synaptic circuitry mechanisms remain unclear. In this study, we performed in vivo whole-cell recordings from layer 5 (L5) pyramidal neurons in the rat A1 and found two distinct neuronal classes according to their functional properties. Intrinsic-bursting (IB) neurons, the L5 corticofugal neurons, exhibited early and rather unselective spike responses to a wide range of frequencies. The exceptionally broad spectral tuning of IB neurons was attributable to their broad excitatory inputs with long temporal durations and inhibitory inputs being more narrowly tuned than excitatory inputs. This uncommon pattern of excitatory-inhibitory interplay was attributed initially to a broad thalamocortical convergence onto IB neurons, which also receive temporally prolonged intracortical excitatory input as well as feedforward inhibitory input at least partially from more narrowly tuned fast-spiking inhibitory neurons. In contrast, regular-spiking neurons, which are mainly corticocortical, exhibited sharp frequency tuning similar to L4 pyramidal cells, underlying which are well-matched purely intracortical excitation and inhibition. The functional dichotomy among L5 pyramidal neurons suggests two distinct processing streams. The spectrally and temporally broad synaptic integration in IB neurons may ensure robust feedback signals to facilitate subcortical function and plasticity in a general manner.
Schrode, Katrina M; Bee, Mark A
Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467
Parazzini, Marta; Lutman, Mark E; Moulin, Annie; Barnel, Cécile; Sliwinska-Kowalska, Mariola; Zmyslony, Marek; Hernadi, Istvan; Stefanics, Gabor; Thuroczy, Gyorgy; Ravazzani, Paolo
The aim of this study, which was performed in the framework of the European project EMFnEAR, was to investigate the potential effects of Universal Mobile Telecommunications System (UMTS, also known as 3G) exposure at a high specific absorption rate (SAR) on the human auditory system. Participants were healthy young adults with no hearing or ear disorders. Auditory function was assessed immediately before and after exposure to radiofrequency (RF) radiation, and only the exposed ear was tested. Tests for the assessment of auditory function were hearing threshold level (HTL), distortion product otoacoustic emissions (DPOAE), contralateral suppression of transiently evoked otoacoustic emission (CAS effect on TEOAE), and auditory evoked potentials (AEP). The exposure consisted of speech at a typical conversational level delivered via an earphone to one ear, plus genuine or sham RF-radiation exposure obtained by an exposure system based on a patch antenna and controlled by software. Results from 73 participants did not show any consistent pattern of effects on the auditory system after a 20-min UMTS exposure at 1947 MHz at a maximum SAR over 1 g of 1.75 W/kg at a position equivalent to the cochlea. Analysis entailed a double-blind comparison of genuine and sham exposure. It is concluded that short-term UMTS exposure at this relatively high SAR does not cause measurable immediate effects on the human auditory system.
Costa-Faidella, Jordi; Baldeweg, Torsten; Grimm, Sabine; Escera, Carles
Neural activity in the auditory system decreases with repeated stimulation, matching stimulus probability in multiple timescales. This phenomenon, known as stimulus-specific adaptation, is interpreted as a neural mechanism of regularity encoding aiding auditory object formation. However, despite the overwhelming literature covering recordings from single-cell to scalp auditory-evoked potential (AEP), stimulation timing has received little interest. Here we investigated whether timing predictability enhances the experience-dependent modulation of neural activity associated with stimulus probability encoding. We used human electrophysiological recordings in healthy participants who were exposed to passive listening of sound sequences. Pure tones of different frequencies were delivered in successive trains of a variable number of repetitions, enabling the study of sequential repetition effects in the AEP. In the predictable timing condition, tones were delivered with isochronous interstimulus intervals; in the unpredictable timing condition, interstimulus intervals varied randomly. Our results show that unpredictable stimulus timing abolishes the early part of the repetition positivity, an AEP indexing auditory sensory memory trace formation, while leaving the later part (≈ >200 ms) unaffected. This suggests that timing predictability aids the propagation of repetition effects along the auditory pathway, most likely from association auditory cortex (including the planum temporale) toward primary auditory cortex (Heschl's gyrus) and beyond, as judged by the timing of AEP latencies. This outcome calls for attention to stimulation timing in future experiments regarding sensory memory trace formation in AEP measures and stimulus probability encoding in animal models.
Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J
This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or ‘spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971
Begault, D R
The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.
Franosch, Jan-Moritz P.; Kempter, Richard; Fastl, Hugo; van Hemmen, J. Leo
The Zwicker tone is an auditory aftereffect. For instance, after switching off a broadband noise with a spectral gap, one perceives it as a lingering pure tone with the pitch in the gap. It is a unique illusion in that it cannot be explained by known properties of the auditory periphery alone. Here we introduce a neuronal model explaining the Zwicker tone. We show that a neuronal noise-reduction mechanism in conjunction with dominantly unilateral inhibition explains the effect. A pure tone’s “hole burning” in noisy surroundings is given as an illustration.
Strauss, Johannes; Lakes-Harlan, Reinhard
Cicadas (Homoptera: Auchenorrhyncha: Cicadidae) use acoustic signalling for mate attraction and perceive auditory signals by a tympanal organ in the second abdominal segment. The main structural features of the ear are the tympanum, the sensory organ consisting of numerous scolopidial cells, and the cuticular link between sensory neurones and tympanum (tympanal ridge and apodeme). Here, a first investigation of the postembryonic development of the auditory system is presented. In insects, sensory neurones usually differentiate during embryogenesis, and sound-perceiving structures form during postembryogenesis. Cicadas have an elongated and subterranean postembryogenesis which can take several years until the final moult. The neuroanatomy and functional morphology of the auditory system of the cicada Okanagana rimosa (Say) are documented for the adult and the three last larval stages. The sensory organ and the projection of sensory afferents to the CNS are present in the earliest stages investigated. The cuticular structures of the tympanum, the tympanal frame holding the tympanum, and the tympanal ridge differentiate in the later stages of postembryogenesis. Thus, despite the different life styles of larvae and adults, the neuronal components of the cicada auditory system develop during embryogenesis or early postembryogenesis, and sound-perceiving structures like the tympana are elaborated later in postembryogenesis. The life cycle allows comparison of cicada development to that of other hemimetabolous insects with respect to the influence of specially adapted life cycle stages on auditory maturation. The neuronal development of the auditory system conforms to the timing in other hemimetabolous insects.
Perception is the process of transmitting and interpreting sensory information, and the primary somatosensory (SI) area in the human cortex is the main sensory receptive area for the sensation of touch. The elaborate neuroanatomical connectivity that subserves the neuronal communication between adjacent and near-adjacent regions within sensory cortex has been widely recognized to be essential to normal sensory function. As a result, systemic cortical alterations that impact cortical regional interactions, as associated with many neurological disorders, are expected to have a significant impact on sensory perception. Recently, our research group has developed a novel sensory diagnostic system that employs quantitative sensory testing methods and is able to non-invasively assess the health status of the central nervous system. The intent of this study is to utilize quantitative sensory testing methods that were designed to generate discriminable percepts to objectively and quantitatively assess the impacts of different conditions on human sensory information processing capacity. Correlating human perception with observations from animal research enables a better understanding of the underlying neurophysiology of human perception. Additional findings from different subject populations provide valuable insight into the underlying mechanisms for the development and maintenance of different neurological diseases. During the course of the study, several protocols were designed and utilized, and this set of sensory-based perceptual metrics was employed to study the effects of different conditions (non-noxious thermal stimulation, chronic pain stage, and normal aging) on sensory perception. It was found that these conditions result in significant deviations of the subjects' tactile information processing capacities from normal values. Although the observed shift of sensory detection sensitivity could be a result of enhanced peripheral activity, the changes in the effects
Froemke, Robert C.; Schreiner, Christoph E.
Processing of auditory information requires constant adjustment due to alterations of the environment and changing conditions in the nervous system with age, health, and experience. Consequently, patterns of activity in cortical networks have complex dynamics over a wide range of timescales, from milliseconds to days and longer. In the primary auditory cortex (AI), multiple forms of adaptation and plasticity shape synaptic input and action potential output. However, the variance of neuronal responses has made it difficult to characterize AI receptive fields and to determine the function of AI in processing auditory information such as vocalizations. Here we describe recent studies on the temporal modulation of cortical responses and consider the relation of synaptic plasticity to neural coding. PMID:26497430
Papadaniil, Chrysa D; Kosmidou, Vasiliki E; Tsolaki, Anthoula; Tsolaki, Magda; Kompatsiaris, Ioannis Yiannis; Hadjileontiadis, Leontios J
Recent evidence suggests that cross-frequency coupling (CFC) plays an essential role in multi-scale communication across the brain. The amplitude of high-frequency oscillations, responsible for local activity, is modulated by the phase of lower-frequency activity in a task- and region-relevant way. In this paper, we examine this phase-amplitude coupling in a two-tone oddball paradigm for the low frequency bands (delta, theta, alpha, and beta) and determine the most prominent CFCs. Data consisted of cortical time series, extracted by applying three-dimensional vector field tomography (3D-VFT) to high-density (256-channel) electroencephalography (HD-EEG), and CFC analysis was based on the phase-amplitude coupling (PAC) metric. Our findings suggest CFC spanning all brain regions and low frequencies. Stronger coupling was observed in the delta band, which is closely linked to sensory processing. However, theta coupling was reinforced in the target tone response, revealing a task-dependent CFC and its role in brain network communication.
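Phase-amplitude coupling of the kind described above is, in the broader literature, often quantified with a mean-vector-length modulation index: take the phase of the slow band and the envelope of the fast band, and measure how strongly the envelope clusters at particular phases. The sketch below illustrates only that general idea (it is not the authors' 3D-VFT pipeline; all signal parameters and function names are invented for the example):

```python
# Hedged sketch of a mean-vector-length phase-amplitude coupling (PAC)
# estimate. Frequencies, durations, and names are illustrative only.
import numpy as np
from scipy.signal import hilbert

def pac_mvl(low_band: np.ndarray, high_band: np.ndarray) -> float:
    """Mean vector length of high-band amplitude weighted by low-band phase."""
    phase = np.angle(hilbert(low_band))      # instantaneous phase of slow signal
    amplitude = np.abs(hilbert(high_band))   # instantaneous envelope of fast signal
    return float(np.abs(np.mean(amplitude * np.exp(1j * phase))))

fs = 1000                                    # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)                  # 5 s of simulated data
slow = np.sin(2 * np.pi * 6 * t)             # 6-Hz low-frequency oscillation
# Fast oscillation whose amplitude follows the slow phase (coupled) ...
fast_coupled = (1 + np.cos(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 60 * t)
# ... versus a fast oscillation with a constant envelope (uncoupled).
fast_flat = np.sin(2 * np.pi * 60 * t)

print(round(pac_mvl(slow, fast_coupled), 3))  # clearly larger than...
print(round(pac_mvl(slow, fast_flat), 3))     # ...the uncoupled case
```

The coupled case yields a modulation index near 0.5 for this synthetic signal, while the constant-envelope case stays near zero, which is the contrast such a metric is built to detect.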
Nowotny, Manuela; Udayashankar, Arun Palghat; Weber, Melanie; Hummel, Jennifer; Kössl, Manfred
Place-based frequency representation, called tonotopy, is a typical property of hearing organs for the discrimination of different frequencies. Due to its coiled structure and secure housing, the mammalian cochlea is difficult to access. Hence, our knowledge about in vivo inner-ear mechanics is restricted to small regions. In this study, we present in vivo measurements that focus on the easily accessible, uncoiled auditory organs of bushcrickets, which are located in their foreleg tibiae. Sound enters the body via an opening at the lateral side of the thorax and passes through a horn-shaped acoustic trachea before reaching the high-frequency hearing organ called the crista acustica. In addition to the acoustic trachea as the structure that transmits incoming sound towards the hearing organ, bushcrickets also possess two tympana, specialized plate-like structures, on the anterior and posterior sides of each tibia. They provide a secondary path of excitation for the sensory receptors at low frequencies. We investigated the mechanics of the crista acustica in the tropical bushcricket Mecopoda elongata. The frequency-dependent motion of the crista acustica was captured using a laser-Doppler-vibrometer system. Using pure-tone stimulation of the crista acustica, we could elicit traveling waves along the length of the hearing organ that move from the distal high-frequency to the proximal low-frequency region. In addition, distinct maxima in the velocity response of the crista acustica could be measured at ~7 and ~17 kHz. The traveling-wave-based tonotopy provides the basis for mechanical frequency discrimination along the crista acustica and opens up new possibilities for investigating traveling wave mechanics in vivo.
Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva
Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments, AP tests and completed the (modified) Amsterdam Inventory for Auditory Disability and Hearing Handicap Inventory for Elderly questionnaires. Nine of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. The SRM for spatially separated speech and babble was significantly greater with the FM systems in the patients' ears than without them. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems, when noise was spatially separated from the
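The SRM definition in the abstract above is simple arithmetic over two measured SNRs: the SNR needed for 50% correct recognition with co-located speech and babble, minus the SNR needed once the babble is moved to the sides. As a hedged illustration (the dB values below are invented, not the study's data):

```python
# Minimal arithmetic sketch of spatial release from masking (SRM).
# Both input values are hypothetical speech-reception thresholds in dB.
def spatial_release_from_masking(snr_colocated_db: float,
                                 snr_separated_db: float) -> float:
    """Positive SRM means spatial separation helped the listener."""
    return snr_colocated_db - snr_separated_db

# Hypothetical listener: needs +2 dB SNR with co-located babble, but
# only -4 dB SNR with babble moved to +/-90 degrees azimuth.
print(spatial_release_from_masking(2.0, -4.0))  # -> 6.0 dB of release
```

A larger SRM with the FM systems in place than without them is the pattern the study reports.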
Maffei, Chiara; Soria, Guadalupe; Prats-Galino, Alberto; Catani, Marco
The recent advent of diffusion imaging tractography has opened a new window into the in vivo white-matter anatomy of the human brain. This is of particular importance for the connections of the auditory system, which may have undergone substantial development in humans in relation to language. However, tractography of the human auditory pathways has proved to be challenging due to current methodologic limitations and the intrinsic anatomic features of the subcortical connections that carry acoustic information in the brainstem. More reliable findings are forthcoming from tractography studies of corticocortical connections associated with language processing. In this chapter we introduce the reader to basic principles of diffusion imaging and tractography. A selected review of the tractography studies of the auditory pathways will be presented, with particular attention given to the cerebral association pathways of the temporal lobe. Finally, new diffusion methods based on advanced models for mapping fiber crossings will be discussed in the context of the auditory and language networks.
Ramanathan, Dhakshin S.; Conner, James M.; Anilkumar, Arjun A.
Previous studies reported that early postnatal cholinergic lesions severely perturb early cortical development, impairing neuronal cortical migration and the formation of cortical dendrites and synapses. These severe effects of early postnatal cholinergic lesions preclude our ability to understand the contribution of cholinergic systems to the later-stage maturation of topographic cortical representations. To study cholinergic mechanisms contributing to the later maturation of motor cortical circuits, we first characterized the temporal course of cortical motor map development and maturation in rats. In this study, we focused our attention on the maturation of cortical motor representations after postnatal day 25 (PND 25), a time after neuronal migration has been accomplished and cortical volume has reached adult size. We found significant maturation of cortical motor representations after this time, including both an expansion of forelimb representations in motor cortex and a shift from proximal to distal forelimb representations to an extent unexplainable by simple volume enlargement of the neocortex. Specific cholinergic lesions placed at PND 24 impaired enlargement of distal forelimb representations in particular and markedly reduced the ability to learn skilled motor tasks as adults. These results identify a novel and essential role for cholinergic systems in the late refinement and maturation of cortical circuits. Dysfunctions in this system may constitute a mechanism of late-onset neurodevelopmental disorders such as Rett syndrome and schizophrenia. PMID:25505106
Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.
Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin
Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3-day-old piglets by clamping both carotid arteries for 30 minutes with vascular occluders and lowering the fraction of inspired oxygen. We compared the Auditory Brain Responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although the differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway. PMID:26010092
Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César
Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning over multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway.
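The logic of a frequency oddball paradigm with a "random" control, as described above, can be sketched in a few lines: in the oddball condition a deviant tone is rare against a repeated standard (so a regularity exists to violate), while in the control condition each tone is equally rare but no regularity is established. All frequencies, probabilities, and function names below are invented for illustration; this is not the authors' stimulus code.

```python
# Hedged sketch of oddball vs. random-control stimulus sequences.
import random

def oddball_sequence(n: int, standard: float = 1000.0,
                     deviant: float = 1200.0, p_deviant: float = 0.1,
                     seed: int = 0) -> list:
    """Repeated standard tone with rare deviants (a regularity to violate)."""
    rng = random.Random(seed)
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n)]

def random_control_sequence(n: int, freqs: list, seed: int = 0) -> list:
    """Equiprobable tones: each is as rare, but no regularity is built up."""
    rng = random.Random(seed)
    return [rng.choice(freqs) for _ in range(n)]

seq = oddball_sequence(500)
rate = sum(f == 1200.0 for f in seq) / len(seq)
print(rate)  # deviant rate close to the nominal 0.1
```

Comparing responses to the deviant against responses to a physically identical tone in the control sequence separates genuine regularity encoding from simple refractoriness, which is the control the study implements.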
Zoefel, Benedikt; Reddy Pasham, Naveen; Brüers, Sasskia; VanRullen, Rufin
Evidence for rhythmic or 'discrete' sensory processing is abundant for the visual system, but sparse and inconsistent for the auditory system. Fundamental differences in the nature of visual and auditory inputs might account for this discrepancy: whereas the visual system mainly relies on spatial information, time might be the most important factor for the auditory system. In contrast to vision, temporal subsampling (i.e. taking 'snapshots') of the auditory input stream might thus prove detrimental for the brain, as essential information would be lost. Rather than embracing the view of continuous auditory processing, we recently proposed that discrete 'perceptual cycles' might exist in the auditory system, but on a hierarchically higher level of processing, involving temporally more stable features. This proposal leads to the prediction that the auditory system would be more robust to temporal subsampling when applied to a 'high-level' decomposition of auditory signals. To test this prediction, we constructed speech stimuli that were subsampled at different frequencies, either at the input level (following a wavelet transform) or at the level of auditory features (on the basis of LPC vocoding), and presented them to human listeners. Auditory recognition was significantly more robust to subsampling in the latter case, that is, on a relatively high level of auditory processing. Although our results do not directly demonstrate perceptual cycles in the auditory domain, they (a) show that their existence is possible without disrupting temporal information to a critical extent and (b) confirm our proposal that, if they do exist, they should operate on a higher level of auditory processing.
Yu, Yan H.; Wagner, Monica
The goal of the current analysis was to examine the maturation of cortical auditory evoked potentials (CAEPs) from three months of age to eight years of age. The superior frontal positive-negative-positive sequence (P1, N2, P2) and the temporal-site negative-positive-negative sequence (possibly Na, Ta, Tb of the T-complex) were examined. Event-related potentials were recorded from 63 scalp sites to a 250-ms vowel. Amplitude and latency of peaks were measured at left and right frontal sites (near Fz) and at left and right temporal sites (T7 and T8). In addition, the largest peak (typically corresponding to P1) was selected from global field power (GFP). The results revealed a large positive peak (P1) easily identified at frontal sites across all ages. The N2 emerged after 6 months of age and the following P2 between 8 and 30 months of age. The latencies of these peaks decreased exponentially, with the most rapid decrease observed for P1. For amplitude, only P1 showed a clear relationship with age, becoming more positive in a somewhat linear fashion. At the temporal sites only a negative peak, which might be Na, was clearly observed at both left and right sites in children older than 14 months, peaking between 100 and 200 ms. P1 measures at frontal sites and Na peak latencies were moderately correlated. The temporal negative peak latency showed a different maturational time course (linear in nature) than the P1 peak, suggesting at least partial independence. Distinct Ta (positive) and Tb (negative) peaks, following Na and peaking between 120 and 220 ms, were not consistently found in most age groups of children, except Ta, which was present in 7-year-olds. Future research, which includes manipulation of stimulus factors and use of modeling techniques, will be needed to explain the apparent, protracted maturation of the temporal-site measures in the current study. PMID:25219893
Dinces, Elizabeth; Sussman, Elyse
Objectives/Hypothesis The complexity of the environment in which sounds are presented, as well as the stimulus presentation rate, influences how sound intensity is centrally encoded, with differences between children and adults. Study Design Cortical auditory evoked potential (CAEP) comparison study in children and adults examining two stimulus rates and three different stimulus contexts. Methods Twelve 10- and 11-year-olds and 11 adults were studied in two experiments examining the CAEP to a 1-kHz, 50-ms tone. A Slow-Rate experiment at 750-ms stimulus onset asynchrony (SOA) compared CAEPs at 78 dB and 86 dB SPL in two complexity conditions. A Fast-Rate experiment was performed at 125-ms SOA with the same conditions plus an additional complexity condition. Repeated-measures and mixed-model analysis of variance (ANOVA) was used to examine the latency and amplitude of the CAEP components. Results CAEP amplitudes and latencies were significantly affected by rate, intensity, and age, with complexity interacting in multiple mixed-model ANOVAs. P1 was the only CAEP component present at the fast rate. There were main effects of rate, age, and stimulus intensity level on the CAEP amplitudes and latencies. Maturational differences were seen in the interactions of intensity with complexity for the different CAEP components. Conclusions Complexity of the sound environment was reflected in the relative amplitude of the CAEPs evoked by sound intensity. The effect of stimulus intensity depended on the complexity of the surrounding environment. Effects of the surrounding sounds were different in children than in adults. PMID:21792970
Port, Russell G; Gaetz, William; Bloy, Luke; Wang, Dah-Jyuu; Blaskey, Lisa; Kuschner, Emily S; Levy, Susan E; Brodkin, Edward S; Roberts, Timothy P L
Autism spectrum disorder (ASD) is hypothesized to arise from imbalances between excitatory and inhibitory neurotransmission (E/I imbalance). Studies have demonstrated E/I imbalance in individuals with ASD and in corresponding rodent models. One neural process thought to be reliant on E/I balance is gamma-band activity (Gamma), with support arising from observed correlations between motor, as well as visual, Gamma and underlying GABA concentrations in healthy adults. Additionally, decreased Gamma has been observed in ASD individuals and relevant animal models, though the direct relationship between Gamma and GABA concentrations in ASD remains unexplored. This study combined magnetoencephalography (MEG) and edited magnetic resonance spectroscopy (MRS) in 27 typically developing individuals (TD) and 30 individuals with ASD. Auditory-cortex-localized phase-locked Gamma was compared to resting Superior Temporal Gyrus relative cortical GABA concentrations for both children/adolescents and adults. Children/adolescents with ASD exhibited significantly decreased GABA+/Creatine (Cr) levels, though typical Gamma. Additionally, these children/adolescents lacked the typical maturation of GABA+/Cr concentrations and gamma-band coherence. Furthermore, children/adolescents with ASD failed to exhibit the typical GABA+/Cr to gamma-band coherence association. This altered coupling during childhood/adolescence may result in the Gamma decreases observed in adults with ASD. Therefore, individuals with ASD exhibit improper local neuronal circuitry maturation during a childhood/adolescence critical period, when GABA is involved in configuring such circuit function. Provocatively, a novel line of treatment (with a critical time window) is suggested: by increasing neural GABA levels in children/adolescents with ASD, proper local circuitry maturation may be restored, resulting in typical Gamma in adulthood. Autism Res 2017, 10: 593-607. © 2016 International Society for
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, they may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down-regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention-demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing.
Oertel, Donata; Young, Eric D
The shapes of the head and ears of mammals are asymmetrical top-to-bottom and front-to-back. Reflections of sounds from these structures differ with the angle of incidence, producing cues for monaural sound localization in the spectra of the stimuli at the eardrum. Neurons in the dorsal cochlear nucleus (DCN) respond specifically to spectral cues and integrate them with somatosensory, vestibular and higher-level auditory information through parallel fiber inputs in a cerebellum-like circuit. Synapses between parallel fibers and their targets show long-term potentiation (LTP) and long-term depression (LTD), whereas those between auditory nerve fibers and their targets do not. This paper discusses the integration of acoustic and proprioceptive information in terms of possible computational roles for the DCN.
Fritzsch, Bernd; Pan, Ning; Jahan, Israt; Duncan, Jeremy S; Kopecky, Benjamin J; Elliott, Karen L; Kersigo, Jennifer; Yang, Tian
The tetrapod auditory system transmits sound through the outer and middle ear to the organ of Corti or other sound pressure receivers of the inner ear where specialized hair cells translate vibrations of the basilar membrane into electrical potential changes that are conducted by the spiral ganglion neurons to the auditory nuclei. In other systems, notably the vertebrate limb, a detailed connection between the evolutionary variations in adaptive morphology and the underlying alterations in the genetic basis of development has been partially elucidated. In this review, we attempt to correlate evolutionary and partially characterized molecular data into a cohesive perspective of the evolution of the mammalian organ of Corti out of the tetrapod basilar papilla. We propose a stepwise, molecularly partially characterized transformation of the ancestral, vestibular developmental program of the vertebrate ear. This review provides a framework to decipher both discrete steps in development and the evolution of unique functional adaptations of the auditory system. The combined analysis of evolution and development establishes a powerful cross-correlation where conclusions derived from either approach become more meaningful in a larger context which is not possible through exclusively evolution or development centered perspectives. Selection may explain the survival of the fittest auditory system, but only developmental genetics can explain the arrival of the fittest auditory system. [Modified after (Wagner 2011)]. © 2013 Wiley Periodicals, Inc.
Frisina, Robert D.
Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]
Xu, Qin; Ye, Datian
Auditory temporal integration (ATI) has been widely described in psychoacoustic studies, especially for loudness perception. Loudness increases with increasing sound duration for durations up to a time constant of about 100-200 ms, after which loudness saturates with further increases in duration. However, the electrophysiological mechanism underlying the ATI phenomenon is not well understood. To investigate ATI at the brainstem level of the auditory system and its relationship to cortical and behavioral ATI, the frequency-following response (FFR) was acquired in our study. Simultaneously, ATI in the auditory cortex was evaluated by the cortical response P1. Behavioral loudness and electrophysiological measures were estimated in normal-hearing young adults for the vowel /a/, whose duration varied from 50 ms to 175 ms. Significant effects of stimulus duration were found on both FFR and P1 amplitudes. Linear regression analysis revealed that as stimulus duration increased, brainstem FFR amplitude was significantly associated with cortical P1 amplitude and behavioral loudness, confirming the existence of temporal integration in the auditory brainstem. Moreover, behavioral loudness ATI was better predicted using brainstem and cortical measures together than using either one separately, indicating an interplay and coordination for ATI across the three levels along the auditory pathway.
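The saturation of loudness growth described above is often modeled as leaky temporal integration of stimulus intensity with a time constant on the order of 100-200 ms. The sketch below is illustrative only; the exponential form and the 150-ms time constant are textbook assumptions, not values taken from this study.

```python
import math

def integrated_intensity(duration_ms, tau_ms=150.0, intensity=1.0):
    # Leaky-integrator sketch of auditory temporal integration:
    # the integrated quantity grows with stimulus duration and
    # saturates once the duration greatly exceeds tau (assumed 150 ms).
    return intensity * tau_ms * (1.0 - math.exp(-duration_ms / tau_ms))

# Durations spanning the study's range (50-175 ms) plus a long reference tone
for d in (50, 100, 175, 500):
    print(d, round(integrated_intensity(d), 1))
```

Growth is steep below the assumed time constant and nearly flat beyond it, mirroring the psychoacoustic saturation the abstract describes.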
Silva, Liliane Aparecida Fagundes; Magliaro, Fernanda Cristina Leite; Carvalho, Ana Claudia Martinho de; Matas, Carla Gentile
The purpose of this study was to monitor the emergence of and changes to the components of the Long Latency Auditory Evoked Potentials (LLAEP) in normal-hearing children. This longitudinal study included children of both genders: seven aged between 10 and 35 months, and eight aged between 37 and 63 months. The electrophysiological hearing evaluation consisted of analysis of LLAEP obtained in a sound field generated with loudspeakers positioned at an azimuth of 90°, through which the syllable /ba/ was played at an intensity of 70 dB HL. Each child underwent an initial evaluation followed by two re-evaluations, three and nine months later. The emergence of LLAEP components across the nine-month follow-up period was observed. P1 and N2 were the most common components in children of this age range. There was no statistically significant difference in the occurrence of the P1, N1, P2, and N2 components between younger and older children. Regarding latency values, the greatest changes over time were observed in the P1 component for younger children and in the N2 component for older children. Only the P1 component significantly differed between the groups, with the highest latency values observed in younger children. LLAEP maturation occurs gradually, and the emergence of complex components appears to be related more to the maturation of the central auditory nervous system than to chronological age.
Hackett, Troy A.; Rakic, Pasko; Levitt, Pat; Polley, Daniel B.
Auditory stimulus representations are dynamically maintained by ascending and descending projections linking the auditory cortex (Actx), medial geniculate body (MGB), and inferior colliculus. Although the extent and topographic specificity of descending auditory corticofugal projections can equal or surpass that of ascending corticopetal projections, little is known about the molecular mechanisms that guide their development. Here, we used in utero gene electroporation to examine the role of EphA receptor signaling in the development of corticothalamic (CT) and corticocollicular connections. Early in postnatal development, CT axons were restricted to a deep dorsal zone (DDZ) within the MGB that expressed low levels of the ephrin-A ligand. By hearing onset, CT axons had innervated surrounding regions of MGB in control-electroporated mice but remained fixed within the DDZ in mice overexpressing EphA7. In vivo neurophysiological recordings demonstrated a corresponding reduction in spontaneous firing rate, but no changes in sound-evoked responsiveness within MGB regions deprived of CT innervation. Structural and functional CT disruption occurred without gross alterations in thalamocortical connectivity. These data demonstrate a potential role for EphA/ephrin-A signaling in the initial guidance of corticofugal axons and suggest that “genetic rewiring” may represent a useful functional tool to alter cortical feedback without silencing Actx. PMID:22490549
Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G
Cross-modal plasticity following peripheral sensory loss enables deprived cortex to provide enhanced abilities in remaining sensory systems. These functional adaptations have been demonstrated in cat auditory cortex following early-onset deafness in electrophysiological and psychophysical studies. However, little information is available concerning any accompanying structural compensations. To examine the influence of sound experience on areal cartography, auditory cytoarchitecture was examined in hearing cats, early-deaf cats, and cats with late-onset deafness. Cats were deafened shortly after hearing onset or in adulthood. Cerebral cytoarchitecture was revealed immunohistochemically using SMI-32, a monoclonal antibody used to distinguish auditory areas in many species. Auditory areas were delineated in coronal sections and their volumes measured. Staining profiles observed in hearing cats were conserved in early- and late-deaf cats. In all deaf cats, dorsal auditory areas were the most mutable. Early-deaf cats showed further modifications, with significant expansions in second auditory cortex and ventral auditory field. Borders between dorsal auditory areas and adjacent visual and somatosensory areas were shifted ventrally, suggesting expanded visual and somatosensory cortical representation. Overall, this study shows the influence of acoustic experience in cortical development, and suggests that the age of auditory deprivation may significantly affect auditory areal cartography. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: email@example.com.
Mørch-Johnsen, Lynn; Nesvåg, Ragnar; Jørgensen, Kjetil N; Lange, Elisabeth H; Hartberg, Cecilie B; Haukvik, Unn K; Kompus, Kristiina; Westerhausen, René; Osnes, Kåre; Andreassen, Ole A; Melle, Ingrid; Hugdahl, Kenneth; Agartz, Ingrid
Neuroimaging studies have demonstrated associations between smaller auditory cortex volume and auditory hallucinations (AH) in schizophrenia. Reduced cortical volume can result from a reduction of either cortical thickness or cortical surface area, which may reflect different neuropathology. We investigate for the first time how thickness and surface area of the auditory cortex relate to AH in a large sample of schizophrenia spectrum patients. Schizophrenia spectrum (n = 194) patients underwent magnetic resonance imaging. Mean cortical thickness and surface area in auditory cortex regions (Heschl's gyrus [HG], planum temporale [PT], and superior temporal gyrus [STG]) were compared between patients with (AH+, n = 145) and without (AH-, n = 49) a lifetime history of AH and 279 healthy controls. AH+ patients showed significantly thinner cortex in the left HG compared to AH- patients (d = 0.43, P = .0096). There were no significant differences between AH+ and AH- patients in cortical thickness in the PT or STG, or in auditory cortex surface area in any of the regions investigated. The group difference in cortical thickness in the left HG was not affected by duration of illness or current antipsychotic medication. AH in schizophrenia patients were related to thinner cortex, but not smaller surface area, of the left HG, a region which includes the primary auditory cortex. The results support the view that structural abnormalities of the auditory cortex underlie AH in schizophrenia. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: firstname.lastname@example.org.
Simon, E; Perrot, X; Mertens, P
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve) that is not limited to simple information transmission but performs a veritable integration of the sound stimulus at different levels, analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and the difference in intensity between signals coming from the two ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through attention to the signal.
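The binaural phase-shift cue mentioned above can be illustrated with a simple geometric model. The sketch below uses a Woodworth-style spherical-head approximation of the interaural time difference; the head radius and the formula are standard textbook assumptions, not part of this review.

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def itd_seconds(azimuth_deg):
    # Woodworth approximation of the interaural time difference
    # for a distant source at the given azimuth (0 deg = straight ahead).
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees azimuth yields an ITD of roughly 650 microseconds,
# well within the sub-millisecond range the binaural system resolves.
print(round(itd_seconds(90.0) * 1e6))
```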
León, Alex; Elgueda, Diego; Silva, María A.; Hamamé, Carlos M.; Delano, Paul H.
Background The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. Methodology/Principal Findings Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitant with CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. Conclusions/Significance These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the obtained effects
Rabang, Cal F; Lin, Jeff; Wu, Guangying K
The auditory system detects and processes dynamic sound information transmitted in the environment. Beyond the basic acoustic parameters, such as frequency, amplitude and phase, the time-varying changes of these parameters must also be encoded in our brain. Frequency-modulated (FM) sound is socially and environmentally significant, and the direction of FM sweeps is essential for animal communication and human speech. Many auditory neurons selectively respond to the directional change of such FM signals. In the past half century, our knowledge of auditory representation and processing has been updated frequently, due to technological advancement. Recently, in vivo whole-cell voltage clamp recordings have been applied to different brain regions in sensory systems. These recordings illustrate the synaptic mechanisms underlying basic sensory information processing and provide profound insights toward our understanding of neural circuits for complex signal analysis. In this review, we summarize the major findings of direction selectivity at several key auditory regions and emphasize the recent discoveries on the synaptic mechanisms for direction selectivity in the auditory system. We conclude this review by describing promising technical developments in dissecting neural circuits and future directions in the study of complex sound analysis.
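Direction-selectivity experiments of the kind reviewed above typically contrast upward and downward frequency-modulated sweeps. A minimal sketch of linear FM sweep synthesis by phase accumulation follows; the sample rate, frequencies, and duration are arbitrary illustrative choices, not values from the review.

```python
import math

RATE = 16000  # assumed sample rate, Hz

def fm_sweep(f_start_hz, f_end_hz, dur_s=0.1):
    # Linear FM sweep built by accumulating instantaneous phase;
    # the sweep is 'upward' when f_end_hz > f_start_hz, else 'downward'.
    n = int(dur_s * RATE)
    samples, phase = [], 0.0
    for t in range(n):
        f = f_start_hz + (f_end_hz - f_start_hz) * t / n  # instantaneous frequency
        phase += 2.0 * math.pi * f / RATE
        samples.append(math.sin(phase))
    return samples

upward = fm_sweep(1000.0, 4000.0)    # rising sweep
downward = fm_sweep(4000.0, 1000.0)  # direction-reversed control
```

A direction-selective neuron would, in such an experiment, respond preferentially to one of these two otherwise spectrally matched stimuli.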
Kamiya, K; Takahashi, K; Kitamura, K; Momoi, T; Yoshikawa, Y
Mouse auditory neurons, hair cells, and their supporting cells in the cochlea are considered to be generated mainly during embryonic development and to be sustained throughout life. In the present study, however, we observed that auditory ganglion cells in the spiral ganglia undergo apoptosis and mitosis in the suckling mouse (1- to 2-week-old C3H/HeJ mice) with a normal auditory system. In spiral ganglia at postnatal days 7 (P7) and 10 (P10), TUNEL (TdT-mediated dUTP nick-end labeling)-positive and morphologically apoptotic ganglion cells were found. Furthermore, by bromodeoxyuridine labeling, mitosis of auditory ganglion cells was found at P10 to P14. In a functional study of the auditory brainstem response, we demonstrated that the C3H/HeJ mouse acquires the ability to hear airborne sound at P12, the same time as the opening of the external acoustic meatus (EAM). These results indicate that C3H/HeJ auditory ganglion cells retain the ability to proliferate even after opening of the EAM and the initial input of airborne sound. We found that postnatal apoptosis and mitosis after P7 also occurred in the greater epithelial ridge (GER), an organ important for maturation of the organ of Corti that is located around the inner hair cells. This indicates that GER cells are not only degenerated but also regenerated until their disappearance around P12. This is the first report in mammals to demonstrate that mitosis of spiral ganglion cells and of GER cells occurs not only in embryonic and neonatal development but also in postnatal development of the normal auditory system.
Bidelman, Gavin M; Patro, Chhayakanta
When noise obstructs portions of target sounds, the auditory system fills in the missing information, a phenomenon known as auditory restoration or induction. Previous work in animal models demonstrates that neurons in primary auditory cortex (A1) are capable of restoring occluded target signals, suggesting that early auditory cortex can induce continuity in discontinuous signals (i.e., endogenous restoration). The current consensus is that the neural correlates of auditory induction and perceptual restoration emerge no earlier than A1. Moreover, the neural mechanisms supporting induction in humans are poorly understood. Here, we show that in human listeners, auditory brainstem nuclei support illusory auditory continuity well before engagement of the cerebral cortex. We recorded brainstem responses to modulated target tones that did or did not promote illusory auditory percepts. Auditory continuity was manipulated by introducing masking noise or brief temporal interruptions in otherwise continuous tones. We found that auditory brainstem responses paralleled illusory continuity by tagging target sounds even when they were occluded by the auditory scene. Our results reveal (i) a pre-attentive, subcortical origin for a presumed cortical function and (ii) that brainstem signal processing helps partially cancel the negative effects of masking by restoring missing portions of auditory objects that are fragmented in the soundscape.
Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory or a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing.
Yan, Kai; Tang, Ye-Zhong; Carr, Catherine E.
Geckos use vocalizations for intraspecific communication, but little is known about the organization of their central auditory system. We therefore used antibodies against the calcium-binding proteins calretinin (CR), parvalbumin (PV), and calbindin-D28k (CB) to characterize the gecko auditory system. We also examined expression of both glutamic acid decarboxylase (GAD) and synaptic vesicle protein (SV2). Western blots showed that these antibodies are specific to gecko brain. All three calcium-binding proteins were expressed in the auditory nerve, and CR immunoreactivity labeled the first-order nuclei and delineated the terminal fields associated with the ascending projections from the first-order auditory nuclei. PV expression characterized the superior olivary nuclei, whereas GAD immunoreactivity characterized many neurons in the nucleus of the lateral lemniscus and some neurons in the torus semicircularis. In the auditory midbrain, the distribution of CR, PV, and CB characterized divisions within the central nucleus of the torus semicircularis. All three calcium-binding proteins were expressed in nucleus medialis of the thalamus. These expression patterns are similar to those described for other vertebrates. PMID:20589907
Tron, Nanina; Stölting, Heiko; Kampschulte, Marian; Martels, Gunhild; Stumpner, Andreas; Lakes-Harlan, Reinhard
Several taxa of insects evolved a tympanate ear at different body positions, whereby the ear is composed of common parts: a scolopidial sense organ, a tracheal air space, and a tympanal membrane. Here, we analyzed the anatomy and physiology of the ear at the ventral prothorax of the sarcophagid fly, Emblemasoma auditrix (Soper). We used micro-computed tomography to analyze the ear and its tracheal air space in relation to the body morphology. Both tympana are separated by a small cuticular bridge, face in the same frontal direction, and are backed by a single tracheal enlargement. This enlargement is connected to the anterior spiracles at the dorsofrontal thorax and is continuous with the tracheal network in the thorax and in the abdomen. Analyses of responses of auditory afferents and interneurons show that the ear is broadly tuned, with a sensitivity peak at 5 kHz. Single-cell recordings of auditory interneurons indicate a frequency- and intensity-dependent tuning, whereby some neurons react best to 9 kHz, the peak frequency of the host’s calling song. The results are compared to the convergently evolved ear in Tachinidae (Diptera). PMID:27538415
Imaizumi, Kazuo; Lee, Charles C
The auditory lemniscal thalamocortical (TC) pathway conveys information from the ventral division of the medial geniculate body to the primary auditory cortex (A1). Although their general topographic organization has been well characterized, functional transformations at the lemniscal TC synapse still remain incompletely codified, largely due to the need to integrate functional anatomical results with the variability observed across animal models and experimental techniques. In this review, we discuss these issues with classical approaches, such as in vivo extracellular recordings and tracer injections to physiologically identified areas in A1, and then compare these studies with modern approaches, such as in vivo two-photon calcium imaging, in vivo whole-cell recordings, optogenetic methods, and in vitro methods using slice preparations. A surprising finding from a comparison of classical and modern approaches is the similar degree of convergence from thalamic neurons to single A1 neurons and clusters of A1 neurons, although thalamic convergence to single A1 neurons is more restricted from areas within putative thalamic frequency lamina. These comparisons suggest that frequency convergence from thalamic input to A1 is functionally limited. Finally, we consider the synaptic organization of TC projections and future directions for research.
Montgomery, Joyce; Storey, Keith; Post, Michal; Lemley, Jacky
In this study, a self-operated auditory prompting system is introduced to determine whether it can increase on-task behavior for two students with autism participating in an employment training program. In addition, the amount of prompting provided by support staff is measured. The self-operated auditory prompting system consisted of tape recordings…
Sininger, Y S; Doyle, K J; Moore, J K
Human infants spend the first year of life learning about their environment through experience. Although it is not visible to observers, infants with hearing are learning to process speech and understand language and are quite linguistically sophisticated by 1 year of age. At this same time, the neurons in the auditory brain stem are maturing, and billions of major neural connections are being formed. During this time, the auditory brain stem and thalamus are just beginning to connect to the auditory cortex. When sensory input to the auditory nervous system is interrupted, especially during early development, the morphology and functional properties of neurons in the central auditory system can break down. In some instances, these deleterious effects of lack of sound input can be ameliorated by reintroduction of stimulation, but critical periods may exist for intervention. Hearing loss in newborn infants can go undetected until as late as 2 years of age without specialized testing. When hearing loss is detected in the newborn period, infants can benefit from amplification (hearing aids) and intervention to facilitate speech and language development. All evidence regarding neural development supports such early intervention for maximum development of communication ability and hearing in infants.
Chrostowski, Michael; Salvi, Richard J.; Allman, Brian L.
A high dose of sodium salicylate temporarily induces tinnitus, mild hearing loss, and possibly hyperacusis in humans and other animals. Salicylate has well-established effects on cochlear function, primarily resulting in the moderate reduction of auditory input to the brain. Despite decreased peripheral sensitivity and output, salicylate induces a paradoxical enhancement of the sound-evoked field potential at the level of the primary auditory cortex (A1). Previous electrophysiologic studies have begun to characterize changes in thalamorecipient layers of A1; however, A1 is a complex neural circuit with recurrent intracortical connections. To describe the effects of acute systemic salicylate treatment on both thalamic and intracortical sound-driven activity across layers of A1, we applied current-source density (CSD) analysis to field potentials sampled across cortical layers in the anesthetized rat. CSD maps were normally characterized by a large, short-latency, monosynaptic, thalamically driven sink in granular layers followed by a lower amplitude, longer latency, polysynaptic, intracortically driven sink in supragranular layers. Following systemic administration of salicylate, there was a near doubling of both granular and supragranular sink amplitudes at higher sound levels. The supragranular sink amplitude input/output function changed from becoming asymptotic at approximately 50 dB to sharply nonasymptotic, often dominating the granular sink amplitude at higher sound levels. The supragranular sink also exhibited a significant decrease in peak latency, reflecting an acceleration of intracortical processing of the sound-evoked response. Additionally, multiunit (MU) activity was altered by salicylate; the normally onset/sustained MU response type was transformed into a primarily onset response type in granular and infragranular layers. The results from CSD analysis indicate that salicylate significantly enhances sound-driven response via intracortical circuits
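Current-source density analysis of the kind applied in this study estimates net transmembrane current from the second spatial derivative of the laminar field potential; by convention, negative values are read as current sinks. A minimal one-dimensional sketch follows, with placeholder contact spacing and conductivity values (not taken from the study):

```python
def csd_profile(lfp_mv, spacing_mm=0.1, sigma=0.3):
    # Standard 1-D CSD estimate: negative second spatial difference of
    # the field potential across equally spaced laminar contacts.
    # Negative output values are conventionally interpreted as sinks.
    return [-sigma * (lfp_mv[i - 1] - 2.0 * lfp_mv[i] + lfp_mv[i + 1]) / spacing_mm ** 2
            for i in range(1, len(lfp_mv) - 1)]

# A local negativity in the depth profile shows up as a sink (negative CSD)
profile = [0.0, -0.2, -1.0, -0.2, 0.0]
print(csd_profile(profile))
```

A linear potential gradient across depth yields zero CSD everywhere, which is why the method isolates local current flow from volume-conducted potentials.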
Shafer, Valerie L; Yu, Yan H; Wagner, Monica
The goal of the current analysis was to examine the maturation of cortical auditory evoked potentials (CAEPs) from three months of age to eight years of age. The superior frontal positive-negative-positive sequence (P1, N2, P2) and the temporal-site negative-positive-negative sequence (possibly Na, Ta, Tb of the T-complex) were examined. Event-related potentials were recorded from 63 scalp sites to a 250-ms vowel. Amplitude and latency of peaks were measured at left and right frontal sites (near Fz) and at left and right temporal sites (T7 and T8). In addition, the largest peak (typically corresponding to P1) was selected from global field power (GFP). The results revealed a large positive peak (P1) easily identified at frontal sites across all ages. The N2 emerged after 6 months of age and the following P2 between 8 and 30 months of age. The latencies of these peaks decreased exponentially, with the most rapid decrease observed for P1. For amplitude, only P1 showed a clear relationship with age, becoming more positive in a somewhat linear fashion. At the temporal sites, only a negative peak, which might be Na, was clearly observed at both left and right sites in children older than 14 months, peaking between 100 and 200 ms. P1 measures at frontal sites and Na peak latencies were moderately correlated. The temporal negative peak latency showed a different maturational time course (linear in nature) than the P1 peak, suggesting at least partial independence. Distinct Ta (positive) and Tb (negative) peaks, following Na and peaking between 120 and 220 ms, were not consistently found in most age groups of children, except for Ta, which was present in 7-year-olds. Future research, including manipulation of stimulus factors and use of modeling techniques, will be needed to explain the apparent protracted maturation of the temporal-site measures in the current study. Copyright © 2014 Elsevier B.V. All rights reserved.
In this study, the variation of human auditory evoked mismatch field amplitudes in response to complex tones, as a function of the removal of single partials during the onset period, was investigated. It was determined that: (1) elimination of a single frequency in a sound stimulus plays a significant role in sound recognition by the human brain; (2) by comparing the mismatch responses due to a single frequency elimination in the "Starting Transient" and "Sustained Part" of the sound stimulus, the brain is found to be more sensitive to frequency elimination in the Starting Transient. This study involved four healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG. Verification of the spatial location in the auditory cortex was determined by comparison with MRI images. In the first set of stimuli, rare ('deviant') tones were randomly embedded in a string of repetitive ('standard') tones with five selected onset frequencies, with randomly varying inter-stimulus intervals. In the deviant tones, one of the frequency components was omitted, relative to the standard tones, during the onset period. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment, or the brain processing. In the second set of stimuli, time-structured as above, rare ('deviant') tones, for which one of five selected sustained frequency components was omitted in the sustained tone, were embedded in a string of repetitive ('standard') tones containing all five components. In both measurements, the careful frequency selection precluded reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment, or brain processing. The same considerations for selecting the test frequency partial were applied. Results. By comparing the MMN of the two data sets, the relative
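The standard/deviant construction described above can be sketched in code: a complex tone is a sum of partials, the deviant simply omits one of them, and deviants occur sparsely among standards. All frequencies, rates, and probabilities below are illustrative assumptions, not the study's actual parameters.

```python
import math
import random

RATE = 16000  # assumed sample rate, Hz

def complex_tone(partials_hz, dur_s=0.2, omit_hz=None):
    # Sum of sinusoidal partials; omit_hz drops one partial,
    # mimicking the 'deviant' manipulation described above.
    used = [f for f in partials_hz if f != omit_hz]
    n = int(dur_s * RATE)
    return [sum(math.sin(2.0 * math.pi * f * t / RATE) for f in used)
            for t in range(n)]

partials = [440.0, 880.0, 1320.0, 1760.0, 2200.0]  # hypothetical partials
standard = complex_tone(partials)
deviant = complex_tone(partials, omit_hz=1320.0)   # one partial omitted

# Oddball sequence: rare deviants embedded among repeated standards
random.seed(0)
sequence = ['deviant' if random.random() < 0.15 else 'standard'
            for _ in range(200)]
```

Averaging the responses to each trial type and subtracting them would yield the mismatch waveform whose amplitude this study compares across onset and sustained manipulations.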