Learning Disability Assessed through Audiologic and Physiologic Measures: A Case Study.
ERIC Educational Resources Information Center
Greenblatt, Edward R.; And Others
1983-01-01
The report describes a child with central auditory dysfunction, the first reported case in which brain-stem dysfunction on audiologic tests was associated with specific electrophysiologic changes in the brain-stem auditory-evoked responses. (Author/CL)
Tarasenko, Melissa A.; Swerdlow, Neal R.; Makeig, Scott; Braff, David L.; Light, Gregory A.
2014-01-01
Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker – the auditory brain-stem response (ABR) to complex sounds (cABR) – that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions. PMID:25352811
Efficacy of Human Adipose Tissue-Derived Stem Cells on Neonatal Bilirubin Encephalopathy in Rats.
Amini, Naser; Vousooghi, Nasim; Hadjighassem, Mahmoudreza; Bakhtiyari, Mehrdad; Mousavi, Neda; Safakheil, Hosein; Jafari, Leila; Sarveazad, Arash; Yari, Abazar; Ramezani, Sara; Faghihi, Faezeh; Joghataei, Mohammad Taghi
2016-05-01
Kernicterus is a neurological syndrome associated with indirect bilirubin accumulation and damage to the basal ganglia, cerebellum and brain stem nuclei, particularly the cochlear nucleus. To mimic haemolysis in a rat model similar to that observed in a preterm human, we injected phenylhydrazine into 7-day-old rats to induce haemolysis and then infused sulfisoxazole into the same rats at day 9 to block bilirubin binding sites on albumin. We investigated the effectiveness of human adipose-derived stem cells as a therapeutic paradigm for perinatal neuronal repair in a kernicterus animal model. The levels of total bilirubin, indirect bilirubin, brain bilirubin and brain iron were significantly increased in the modelling group. There was a significant decrease in all severity levels of the auditory brainstem response test in the two modelling groups. Akinesia, bradykinesia and slips were significantly reduced in the experimental group. Apoptosis in the basal ganglia and cerebellum was significantly decreased in the stem cell-treated group in comparison to the vehicle group. All severity levels of the auditory brainstem response tests were significantly decreased in 2-month-old rats. Transplantation resulted in substantial alleviation of walking impairment, apoptosis and auditory dysfunction. This study provides important information for the development of therapeutic strategies using human adipose-derived stem cells in perinatal brain damage to reduce potential sensorimotor deficits.
Electrophysiological measurement of human auditory function
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
Contingent negative variations in the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that the evoked brain stem response to auditory stimuli is clearly related to brain events associated with cognitive processing of acoustic signals, since its properties depend upon where the listener directs attention, whether the signal is an expected event or a surprise, and when a listened-for sound is at last heard.
Brain stem auditory evoked responses in human infants and adults
NASA Technical Reports Server (NTRS)
Hecox, K.; Galambos, R.
1974-01-01
Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.
Neurophysiologic intraoperative monitoring of the vestibulocochlear nerve.
Simon, Mirela V
2011-12-01
Neurosurgical procedures involving the skull base and the structures within it can pose a significant risk of damage to the brain stem and cranial nerves, with life-threatening consequences and/or devastating neurologic deficits. Over the past decade, intraoperative neurophysiology has evolved significantly and now offers a powerful tool for live monitoring of the integrity of nervous structures: dysfunction can be identified early, and prompt modification of the surgical management or operating conditions helps avoid permanent structural damage. Along these lines, the vestibulocochlear nerve (CN VIII) and, to a greater extent, the auditory pathways as they pass through the brain stem are especially at risk during cerebellopontine angle (CPA), posterior/middle fossa, or brain stem surgery. CN VIII can be damaged by several mechanisms, from vascular compromise to mechanical injury by stretch, compression, dissection, and heat. Additionally, the cochlea itself can be significantly damaged during temporal bone drilling by noise, mechanical destruction, or infarction, and because of rupture, occlusion, or vasospasm of the internal auditory artery. CN VIII monitoring can be successfully achieved by live recording of the function of one of its parts, the cochlear or auditory nerve (AN), using brain stem auditory evoked potentials (BAEPs), electrocochleography (ECochG), and compound nerve action potentials (CNAPs) of the cochlear nerve. This is a review of these techniques: their principles, applications, methodology, the interpretation of the evoked responses and their changes from baseline within the context of the surgical and anesthesia environments, and the appropriate management of these changes.
Thirumala, Parthasarathy D; Krishnaiah, Balaji; Crammond, Donald J; Habeych, Miguel E; Balzer, Jeffrey R
2014-04-01
Intraoperative monitoring of brain stem auditory evoked potentials during microvascular decompression (MVD) can prevent hearing loss (HL). Previous studies have shown that changes in wave III (wIII) are an early and sensitive sign of auditory nerve injury. The aim was to evaluate changes in the amplitude and latency of wIII of the brain stem auditory evoked potential during MVD and their association with postoperative HL. Hearing loss was classified by American Academy of Otolaryngology - Head and Neck Surgery (AAO-HNS) criteria, based on changes in pure tone audiometry and speech discrimination score. A retrospective analysis of wIII in patients who underwent intraoperative monitoring with brain stem auditory evoked potentials during MVD was performed. A univariate logistic regression analysis was performed on the independent variables amplitude and latency of wIII at maximal change ("change max") and at the final ("on-skin") recording at the time of skin closure. A further analysis of the same variables was performed adjusting for loss of the wave. The latency of wIII was not significantly different between groups I and II. The amplitude of wIII was significantly decreased in the group with HL. Regression analysis did not find increased odds of HL with changes in the amplitude of wIII. Changes in wave III did not increase the odds of HL in patients who underwent brain stem auditory evoked potential monitoring during MVD. This information might be valuable when evaluating wIII as an alarm criterion during MVD to prevent HL.
Maksimova, M Yu; Sermagambetova, Zh N; Skrylev, S I; Fedin, P A; Koshcheev, A Yu; Shchipakin, V L; Sinicyn, I A
To assess brain stem dysfunction in patients with hemodynamically significant stenosis of the vertebral arteries (VA) using short-latency brainstem auditory evoked potentials (BAEP). The study group included 50 patients (mean age 64±6 years) with hemodynamically significant extracranial VA stenosis. These patients had BAEP abnormalities, including prolongation of the I-V interpeak interval and of the peak V latency as well as a reduction of the peak I amplitude. After transluminal balloon angioplasty with stenting of the VA stenoses, peak V latency was shortened compared with the preoperative period, reflecting improvement of brain stem conductive functions. Atherostenosis of the vertebral arteries is characterized by signs of brain stem dysfunction, predominantly in the pontomesencephalic brain stem. After transluminal balloon angioplasty with stenting of the VA, improvement of brain stem conductive functions was observed.
Hirai, Yasuharu; Nishino, Eri; Ohmori, Harunori
2015-06-01
Despite its widespread use, high-resolution imaging with multiphoton microscopy to record neuronal signals in vivo is limited to the surface of brain tissue because of limited light penetration. Moreover, most imaging studies do not simultaneously record electrical neural activity, which is, however, crucial to understanding brain function. Accordingly, we developed a photometric patch electrode (PME) to overcome the depth limitation of optical measurements and also enable the simultaneous recording of neural electrical responses in deep brain regions. The PME recording system uses a patch electrode as a light guide to excite a fluorescent dye and measure the fluorescence signal, to record the electrical signal, and to apply chemicals locally to the recorded cells. The optical signal was analyzed by either a spectrometer of high light sensitivity or a photomultiplier tube, depending on the kinetics of the responses. We used the PME in Oregon Green BAPTA-1 AM-loaded avian auditory nuclei in vivo to monitor calcium signals and electrical responses. We demonstrated distinct response patterns in three different nuclei of the ascending auditory pathway. On acoustic stimulation, a robust calcium fluorescence response occurred in auditory cortex (field L) neurons that outlasted the electrical response. In the auditory midbrain (inferior colliculus), both responses were transient. In the brain-stem cochlear nucleus magnocellularis, the calcium response seemed to be effectively suppressed by the activity of metabotropic glutamate receptors. In conclusion, the PME provides a powerful tool to study brain function in vivo at a tissue depth inaccessible to conventional imaging devices. Copyright © 2015 the American Physiological Society.
Impact of mild traumatic brain injury on auditory brain stem dysfunction in mouse model.
Amanipour, Reza M; Frisina, Robert D; Cresoe, Samantha A; Parsons, Teresa J; Zhu, Xiaoxia; Borlongan, Cesario V; Walton, Joseph P
2016-08-01
The auditory brainstem response (ABR) is an electrophysiological test that examines the functionality of the auditory nerve and brainstem. Traumatic brain injury (TBI) can be detected if prolonged peak latency is observed in ABR measurements, since latency measures the neural conduction time in the brainstem, and an increase in latency can be a sign of a pathological lesion at the auditory brainstem level. The ABR is elicited by brief sounds and can be used to measure hearing sensitivity as well as temporal processing. Reductions in peak amplitudes and increases in latency are indicative of dysfunction in the auditory nerve and/or central auditory pathways. In this study we used sixteen young adult mice divided into two groups, sham and mild traumatic brain injury (mTBI), with ABR measurements obtained prior to, and at 2, 6, and 14 weeks after, injury. Abnormal ABRs were observed for the nine mTBI cases as early as two weeks after injury, and the deficits persisted through fourteen weeks after injury. Results indicated a significant reduction in the Peak 1 (P1) and Peak 4 (P4) amplitudes to the first noise burst, as well as an increase in P1 and P4 latencies following mTBI. These results are the first to demonstrate auditory sound processing deficits in a rodent model of mild TBI.
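Peak amplitude and latency metrics like those reported above are typically read off the averaged ABR waveform within a latency search window. The sketch below illustrates one way to do this in Python; the sampling rate, window bounds, and synthetic trace are assumptions for illustration, not values from the study.

```python
import numpy as np

def abr_peak_metrics(trace, fs, window_ms):
    """Return (latency_ms, amplitude) of the largest positive peak
    inside a latency window of an averaged ABR trace.

    trace     : 1-D array, averaged ABR waveform
    fs        : sampling rate in Hz (assumed value below)
    window_ms : (start, stop) search window in ms, e.g. (1.0, 2.0) for P1
                -- window bounds here are illustrative, not from the study.
    """
    start = int(window_ms[0] * 1e-3 * fs)
    stop = int(window_ms[1] * 1e-3 * fs)
    segment = trace[start:stop]
    idx = np.argmax(segment)                  # sample index of the peak
    latency_ms = (start + idx) / fs * 1e3     # samples -> milliseconds
    amplitude = segment[idx] - segment.min()  # peak-to-trough within the window
    return latency_ms, amplitude

# Example with a synthetic trace sampled at an assumed 25 kHz.
fs = 25_000
t = np.arange(0, 0.010, 1 / fs)                 # 10 ms epoch
trace = np.exp(-((t - 0.0015) / 0.0002) ** 2)   # fake "P1" near 1.5 ms
print(abr_peak_metrics(trace, fs, (1.0, 2.0)))
```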
Radwan, Heba Mohammed; El-Gharib, Amani Mohamed; Erfan, Adel Ali; Emara, Afaf Ahmad
2017-05-01
Delays in ABR and CAEP wave latencies in children with type 1 DM indicate abnormal neural conduction in DM patients. The duration of DM has a greater effect on auditory function than the control of DM. Diabetes mellitus (DM) is a common endocrine and metabolic disorder. Evoked potentials offer the possibility of a functional evaluation of neural pathways in the central nervous system. The aim was to investigate the effect of type 1 diabetes mellitus (T1DM) on the auditory brain stem response (ABR) and cortical auditory evoked potentials (CAEPs). This study included two groups: a control group (GI), which consisted of 20 healthy children with normal peripheral hearing, and a study group (GII), which consisted of 30 children with type 1 DM. Basic audiological evaluation, ABR, and CAEPs were done in both groups. Delayed absolute latencies of ABR and CAEP waves were found. Amplitudes showed no significant difference between the two groups. A positive correlation was found between ABR wave latencies and duration of DM. No correlation was found between ABR, CAEPs, and glycated hemoglobin.
Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix
2015-01-15
Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing. Copyright © 2015 the American Physiological Society.
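The link between the reported drop in input resistance and capacitance and faster integration can be made concrete through the passive membrane time constant, tau = R_in x C_m. The sketch below uses that standard relation with purely illustrative values, not measurements from the study.

```python
# Passive membrane time constant: tau = R_in * C_m.
# The resistance/capacitance values below are illustrative, not data from the study.
def time_constant_ms(input_resistance_mohm, capacitance_pf):
    # 1 MOhm * 1 pF = 1e6 Ohm * 1e-12 F = 1 microsecond, so scale to milliseconds.
    return input_resistance_mohm * capacitance_pf * 1e-3

immature = time_constant_ms(300, 40)   # e.g. early postnatal: 300 MOhm, 40 pF -> 12 ms
mature   = time_constant_ms(60, 25)    # e.g. older neuron:     60 MOhm, 25 pF -> 1.5 ms
print(immature, mature)                # shorter tau -> briefer voltage integration
```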
Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.
Bauer, Martin; Trahms, Lutz; Sander, Tilmann
2015-04-01
The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system is included because it is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.
The Middle Latency Response (MLR) and Steady State Evoked Potential (SSEP) in Neonates.
1985-05-01
…diagnostic audiologic information will enhance habilitation efforts in prescribing hearing aids and designing appropriate language intervention strategies…
A novel method of brainstem auditory evoked potentials using complex verbal stimuli.
Kouni, Sophia N; Koutsojannis, Constantinos; Ziavra, Nausika; Giannopoulos, Sotirios
2014-08-01
The click and tone-evoked auditory brainstem responses are widely used in clinical practice due to their consistency and predictability. More recently, the speech-evoked responses have been used to evaluate subcortical processing of complex signals, not revealed by responses to clicks and tones. Disyllable stimuli corresponding to familiar words can induce a pattern of voltage fluctuations in the brain stem resulting in a familiar waveform, and they can yield better information about brain stem nuclei along the ascending central auditory pathway. We describe a new method with the use of the disyllable word "baba" corresponding to English "daddy" that is commonly used in many other ethnic languages spanning from West Africa to the Eastern Mediterranean all the way to the East Asia. This method was applied in 20 young adults institutionally diagnosed as dyslexic (10 subjects) or light dyslexic (10 subjects) who were matched with 20 sex, age, education, hearing sensitivity, and IQ-matched normal subjects. The absolute peak latencies of the negative wave C and the interpeak latencies of A-C elicited by verbal stimuli "baba" were found to be significantly increased in the dyslexic group in comparison with the control group. The method is easy and helpful to diagnose abnormalities affecting the auditory pathway, to identify subjects with early perception and cortical representation abnormalities, and to apply the suitable therapeutic and rehabilitation management.
1994-07-01
…psychological refractory period; 15. Two-flash threshold; 16. Critical flicker fusion (CFF); 17. Steady-state visually evoked response; 18. Auditory brain stem response… States of awareness I: Subliminal perception relationships to situational awareness (AL-TR-1992-0085). Brooks Air Force Base, TX: Armstrong Laboratory… the signals required different inputs (e.g., visual versus auditory) (Colley & Beech, 1989). Despite support of this theory from such experiments…
Jiang, Ze Dong
2013-08-01
Neurodevelopment in late preterm infants has recently attracted considerable interest. The prevalence of brain stem conduction abnormality in this population remains unknown. We examined the maximum length sequence brain stem auditory evoked response in 163 infants, born at 33-36 weeks gestation, who had various perinatal problems. Compared with 49 normal term infants without problems, the late preterm infants showed a significant increase in the III-V and I-V interpeak intervals at all click rates from 91 to 910/s, particularly at 455 and 910/s (p < 0.01-0.001). The I-III interval was slightly increased, without a statistically significant difference from the controls at any click rate. These results suggest that neural conduction along the auditory brain stem, mainly its more central or rostral part, is abnormal in late preterm infants with perinatal problems. Of the 163 late preterm infants, the number (and percentage) of infants with an abnormal I-V interval at 91, 227, 455, and 910/s clicks was, respectively, 11 (6.5%), 17 (10.2%), 37 (22.3%), and 31 (18.7%). The number (and percentage) of infants with an abnormal III-V interval at these rates was, respectively, 10 (6.0%), 17 (10.2%), 28 (16.9%), and 36 (21.2%). The abnormality rates were thus much higher at 455 and 910/s clicks than at the lower rates of 91 and 227/s. In total, 42 (25.8%) infants showed abnormal I-V and/or III-V intervals. Conduction in the brain stem, mainly in its more central part, is abnormal in late preterm infants with perinatal problems, and the abnormality is more detectable at high than at low rates of sensory stimulation. A quarter of late preterm infants with perinatal problems have brain stem conduction abnormality.
Breath-holding spells may be associated with maturational delay in myelination of brain stem.
Vurucu, Sebahattin; Karaoglu, Abdulbaki; Paksu, Sukru M; Oz, Oguzhan; Yaman, Halil; Gulgun, Mustafa; Babacan, Oguzhan; Unay, Bulent; Akin, Ridvan
2014-02-01
To evaluate the possible contribution of maturational delay of the brain stem to the etiology of breath-holding spells in children using brain stem auditory evoked potentials. The study group included children who experienced breath-holding spells; the control group consisted of healthy age- and sex-matched children. Age, gender, type and frequency of spells, hemoglobin and ferritin levels in the study group, and brain stem auditory evoked potential results in both groups were recorded. The study group was statistically compared with the control group for brain stem auditory evoked potentials. The mean age of the study and control groups was 26.3 ± 14.6 and 28.9 ± 13.9 months, respectively. The III-V and I-V interpeak latencies were significantly prolonged in the study group compared with the control group (2.07 ± 0.2 vs. 1.92 ± 0.13 milliseconds and 4.00 ± 0.27 vs. 3.83 ± 0.19 milliseconds; P = 0.009 and P = 0.03, respectively). Likewise, the III-V and I-V interpeak latencies of patients without anemia in the study group were significantly prolonged compared with those of the control group (2.09 ± 0.24 vs. 1.92 ± 0.13 milliseconds and 4.04 ± 0.28 vs. 3.83 ± 0.19 milliseconds; P = 0.007 and P = 0.01, respectively). Our results suggest that maturational delay in myelination of the brain stem may have a role in the etiology of breath-holding spells in children.
Altschuler, R A; Dolan, D F; Halsey, K; Kanicki, A; Deng, N; Martin, C; Eberle, J; Kohrman, D C; Miller, R A; Schacht, J
2015-04-30
This study compared the timing of appearance of three components of age-related hearing loss that determine the pattern and severity of presbycusis: the functional and structural pathologies of sensory cells and neurons and changes in gap detection (GD), the latter as an indicator of auditory temporal processing. Using UM-HET4 mice, genetically heterogeneous mice derived from four inbred strains, we studied the integrity of inner and outer hair cells by position along the cochlear spiral, inner hair cell-auditory nerve connections, spiral ganglion neurons (SGN), and determined auditory thresholds, as well as pre-pulse and gap inhibition of the acoustic startle reflex (ASR). Comparisons were made between mice of 5-7, 22-24 and 27-29 months of age. There was individual variability among mice in the onset and extent of age-related auditory pathology. At 22-24 months of age a moderate to large loss of outer hair cells was restricted to the apical third of the cochlea and threshold shifts in the auditory brain stem response were minimal. There was also a large and significant loss of inner hair cell-auditory nerve connections and a significant reduction in GD. The expression of Ntf3 in the cochlea was significantly reduced. At 27-29 months of age there was no further change in the mean number of synaptic connections per inner hair cell or in GD, but a moderate to large loss of outer hair cells was found across all cochlear turns as well as significantly increased ABR threshold shifts at 4, 12, 24 and 48 kHz. A statistical analysis of correlations on an individual animal basis revealed that neither the hair cell loss nor the ABR threshold shifts correlated with loss of GD or with the loss of connections, consistent with independent pathological mechanisms. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Hydrogel limits stem cell dispersal in the deaf cochlea: implications for cochlear implants
NASA Astrophysics Data System (ADS)
Nayagam, Bryony A.; Backhouse, Steven S.; Cimenkaya, Cengiz; Shepherd, Robert K.
2012-12-01
Auditory neurons provide the critical link between a cochlear implant and the brain in deaf individuals; therefore their preservation and/or regeneration is important for optimal performance of this neural prosthesis. In cases where auditory neurons are significantly depleted, stem cells (SCs) may be used to replace the lost population of neurons, thereby re-establishing the critical link between the periphery (implant) and the brain. For such a therapy to be clinically viable, SCs must be differentiated into neurons, retained at their delivery site, and damage to the residual auditory neurons minimized. Here we describe the transplantation of SC-derived neurons into the deaf cochlea, using a peptide hydrogel to limit their dispersal. The described approach illustrates that SCs can be delivered to, and are retained within, the basal turn of the cochlea without a significant loss of endogenous auditory neurons. In addition, the tissue response elicited by this surgical approach was restricted to the surgical site and did not extend beyond the cochlear basal turn. Overall, this approach illustrates the feasibility of targeted cell delivery into the mammalian cochlea using hydrogel, which may be useful for future cell-based transplantation strategies combined with a cochlear implant to restore function.
Specialization of the auditory processing in harbor porpoise, characterized by brain-stem potentials
NASA Astrophysics Data System (ADS)
Bibikov, Nikolay G.
2002-05-01
Brain-stem auditory evoked potentials (BAEPs) were recorded from the head surface of three awake harbor porpoises (Phocoena phocoena). A silver disk placed on the skin surface above the vertex bone was used as the active electrode. The experiments were performed at the Karadag biological station (Crimea peninsula). Clicks and tone bursts were used as stimuli. The temporal and frequency selectivity of the auditory system was estimated using simultaneous and forward masking. An evident minimum of the BAEP thresholds was observed in the range of 125-135 kHz, where the main spectral component of the species-specific echolocation signal is located. In this frequency range the tonal forward masking demonstrated strong frequency selectivity, and an off-response to such tone bursts was a typical observation. An evident BAEP could be recorded up to frequencies of 190-200 kHz; however, outside the acoustical fovea the frequency selectivity was rather poor. Temporal resolution was estimated by measuring BAEP recovery functions for double clicks, double tone bursts, and double noise bursts. The half-time of BAEP recovery was in the range of 0.1-0.2 ms. The data indicate that the porpoise auditory system is strongly adapted to detect closely spaced ultrasonic sounds such as species-specific echolocation signals and their echoes.
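The recovery half-time quoted above is conventionally read off a recovery function: the amplitude of the response to the second stimulus of a pair, relative to a single-stimulus response, plotted against the inter-stimulus interval. A minimal sketch of that calculation, with made-up numbers rather than the study's data:

```python
import numpy as np

def half_recovery_time(intervals_ms, second_click_amp, single_click_amp):
    """Interval at which the second-click BAEP recovers to 50% of the
    single-click amplitude, found by linear interpolation on the
    recovery function. All inputs here are illustrative."""
    ratio = np.asarray(second_click_amp) / single_click_amp
    return float(np.interp(0.5, ratio, intervals_ms))

intervals = [0.05, 0.1, 0.2, 0.5, 1.0]           # inter-click intervals (ms)
amps      = [0.1, 0.35, 0.65, 0.9, 1.0]          # fake second-click amplitudes
print(half_recovery_time(intervals, amps, 1.0))  # ~0.15 ms, same order as reported
```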
Dobek, Christine E; Beynon, Michaela E; Bosma, Rachael L; Stroman, Patrick W
2014-10-01
Music is the oldest known method for relieving pain, and yet, to date, the underlying neural mechanisms have not been studied. Here, we investigate these neural mechanisms by applying a well-defined painful stimulus while participants listened to their favorite music or to no music. Neural responses were mapped with functional magnetic resonance imaging spanning the cortex, brain stem, and spinal cord. Subjective pain ratings were significantly lower when pain was administered with music than without music. The pain stimulus without music elicited neural activity in brain regions that are consistent with previous studies. Brain regions associated with pleasurable music listening included limbic, frontal, and auditory regions, when comparing music with non-music pain conditions. In addition, regions demonstrated activity indicative of descending pain modulation when contrasting the two conditions. These regions include the dorsolateral prefrontal cortex, periaqueductal gray matter, rostral ventromedial medulla, and dorsal gray matter of the spinal cord. This is the first imaging study to characterize the neural response to pain and how pain is mitigated by music, and it provides new insights into the neural mechanisms of music-induced analgesia within the central nervous system. This article presents the first investigation of neural processes underlying music analgesia in human participants. Music modulates pain responses in the brain, brain stem, and spinal cord, and the neural activity changes are consistent with engagement of the descending analgesia system. Copyright © 2014 American Pain Society. Published by Elsevier Inc. All rights reserved.
The vestibulocochlear nerve (VIII).
Benoudiba, F; Toulgoat, F; Sarrazin, J-L
2013-10-01
The vestibulocochlear nerve (8th cranial nerve) is a sensory nerve. It is made up of two nerves: the cochlear nerve, which transmits sound, and the vestibular nerve, which controls balance. It is an intracranial nerve which runs from the sensory receptors in the internal ear to the brain stem nuclei and finally to the auditory areas: the post-central gyrus and superior temporal auditory cortex. The most common lesions responsible for damage to CN VIII are vestibular schwannomas. This report reviews the anatomy and the various investigations of the nerve. Copyright © 2013. Published by Elsevier Masson SAS.
Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús
2004-01-01
The species studied were amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common small lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). All animals were housed at the Institute of Human Communication Disorders, were fed food appropriate for each species, and had water available ad libitum. For recording of brain stem auditory evoked potentials, amphibians, birds, and mammals were anesthetized with injected ketamine at 20, 25, and 50 mg/kg, respectively; reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed on an imaginary line along the mid-sagittal line between both ears and eyes, behind the right ear, and behind the left ear. Stimulation was carried out in a quiet room by means of a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential computer (Racia APE 78). The evoked waves in amphibians showed greater latency than those of the other species. In reptiles, latency was reduced in comparison with amphibians. Birds showed even smaller latency values, while guinea pig latencies were greater than those of doves; guinea pigs, however, responded at stimulation levels 10 dB lower, demonstrating the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one advances along the phylogenetic scale. From these recordings we can say that brain stem evoked responses become more complex and show smaller absolute latencies with advancement along the phylogenetic scale, and that the auditory thresholds obtained likewise agree with the phylogenetic ordering of the species studied. These data indicate that the processing of auditory information is more complex in more evolved species.
Gransier, Robin; Deprez, Hanne; Hofmann, Michael; Moonen, Marc; van Wieringen, Astrid; Wouters, Jan
2016-05-01
Previous studies have shown that objective measures based on stimulation with low-rate pulse trains fail to predict the threshold levels of cochlear implant (CI) users for high-rate pulse trains, as used in clinical devices. Electrically evoked auditory steady-state responses (EASSRs) can be elicited by modulated high-rate pulse trains and can potentially be used to objectively determine the threshold levels of CI users. The responsiveness of the auditory pathway of profoundly hearing-impaired CI users to modulation frequencies is, however, not known. In the present study we investigated the responsiveness of the auditory pathway of CI users to a monopolar 500 pulses per second (pps) pulse train modulated between 1 and 100 Hz. EASSRs to forty-three modulation frequencies, elicited at the subject's maximum comfort level, were recorded by means of electroencephalography. Stimulation artifacts were removed by linear interpolation between a pre- and a post-stimulus sample (i.e., blanking). The phase delay across modulation frequencies was used to differentiate between the neural response and a possible residual stimulation artifact after blanking. Stimulation artifacts were longer than the inter-pulse interval of the 500 pps pulse train for recording electrodes ipsilateral to the CI; as a result, the stimulation artifacts could not be removed by linear interpolation for these electrodes. However, artifact-free responses could be obtained in all subjects from recording electrodes contralateral to the CI when subject-specific reference electrodes (Cz or Fpz) were used. EASSRs to modulation frequencies within the 30-50 Hz range resulted in significant responses in all subjects. Only a small number of significant responses originating from the brain stem (i.e., modulation frequencies in the 80-100 Hz range) could be obtained during a measurement period of 5 min. This reduced synchronized activity of brain stem responses in long-term severely hearing-impaired CI users could be an attribute of processes associated with long-term hearing impairment and/or electrical stimulation. Copyright © 2016 Elsevier B.V. All rights reserved.
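The blanking step described here, replacing the samples around each stimulation pulse by a straight line between a pre- and a post-stimulus sample, can be sketched as follows. The sampling rate, blanking interval, and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def blank_pulse_artifacts(eeg, fs, pulse_times, pre_s=0.0002, post_s=0.0006):
    """Replace the EEG samples around each stimulation pulse by a straight
    line drawn between one sample before and one sample after the artifact.

    eeg         : 1-D array, single EEG channel
    fs          : sampling rate in Hz (assumed value below)
    pulse_times : pulse onset times in seconds
    pre_s/post_s: assumed blanking interval around each pulse (seconds)
    """
    out = eeg.copy()
    for t0 in pulse_times:
        i0 = max(int((t0 - pre_s) * fs), 0)               # last clean sample before
        i1 = min(int((t0 + post_s) * fs), len(eeg) - 1)   # first clean sample after
        n = i1 - i0
        if n > 1:
            out[i0:i1 + 1] = np.linspace(out[i0], out[i1], n + 1)
    return out

# Usage: 500 pps pulse train, 8 kHz EEG sampling (both values illustrative).
fs = 8000
eeg = np.random.randn(fs)              # 1 s of fake EEG
pulses = np.arange(0, 1.0, 1 / 500)    # 500 pulses per second
clean = blank_pulse_artifacts(eeg, fs, pulses)
```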
Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin
2015-01-01
The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954
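Frequency-delimited ("derived-band") responses under high-pass noise masking are commonly obtained by subtracting the response recorded with the masker cut off at the lower band edge from the response recorded with the cutoff at the upper band edge. The sketch below assumes that standard subtraction scheme and illustrative octave cutoffs; it is not necessarily the authors' exact procedure.

```python
import numpy as np

# responses[fc] = averaged speech-ABR recorded with high-pass masking noise
# whose cutoff frequency is fc (Hz); the cutoffs below are illustrative octave edges.
cutoffs = [500, 1000, 2000, 4000, 8000]

def derived_band_responses(responses, cutoffs):
    """Return octave-wide derived-band responses.

    The band between cutoffs[i] and cutoffs[i+1] is estimated as the
    difference of the two masked recordings:
        band(i) = response(cutoff = cutoffs[i+1]) - response(cutoff = cutoffs[i])
    """
    bands = {}
    for lo, hi in zip(cutoffs[:-1], cutoffs[1:]):
        bands[(lo, hi)] = responses[hi] - responses[lo]
    return bands

# Minimal usage with fake 1000-sample epochs.
rng = np.random.default_rng(0)
responses = {fc: rng.standard_normal(1000) for fc in cutoffs}
bands = derived_band_responses(responses, cutoffs)
print(sorted(bands))   # [(500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]
```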
Okhravi, Tooba; Tarvij Eslami, Saeedeh; Hushyar Ahmadi, Ali; Nassirian, Hossain; Najibpour, Reza
2015-02-01
Neonatal jaundice is a common cause of sensorineural hearing loss in children. We aimed to detect the neurotoxic effects of pathologic hyperbilirubinemia on the brain stem and auditory tract using the auditory brain stem evoked response (ABR), which could reveal early effects of hyperbilirubinemia. This case-control study was performed on newborns with pathologic hyperbilirubinemia. The inclusion criteria were healthy term and near-term (35-37 weeks) newborns with pathologic hyperbilirubinemia, defined as serum bilirubin values of ≥ 7 mg/dL, ≥ 10 mg/dL and ≥ 14 mg/dL at the first, second and third day of life, respectively, and bilirubin concentration ≥ 18 mg/dL at over 72 hours of life. The exclusion criteria included family history and diseases causing sensorineural hearing loss, use of ototoxic medications within the preceding five days, convulsion, congenital craniofacial anomalies, birth trauma, preterm newborns < 35 weeks, birth weight < 1500 g, asphyxia, and mechanical ventilation for five days or more. A total of 48 newborns with hyperbilirubinemia met the enrolment criteria as the case group, and 49 healthy newborns served as the control group; all were hospitalized in a university educational hospital (22 Bahman) in Mashhad, a north-eastern city of Iran. ABR was performed in both groups. The evaluated variables were wave latencies, interpeak intervals, and loss of waves. The mean latencies of waves I, III and V of the ABR were significantly longer in the pathologic hyperbilirubinemia group than in the controls (P < 0.001). In addition, the mean interpeak intervals (IPI) of waves I-III, I-V and III-V were significantly longer in the pathologic hyperbilirubinemia group than in the controls (P < 0.001). For example, the mean latency of wave I in the right ear was significantly longer in the case group than in the controls (2.16 ± 0.26 vs. 1.77 ± 0.15 milliseconds, respectively) (P < 0.001). Pathologic hyperbilirubinemia causes acute disturbance of brain stem function; therefore, early diagnosis of neonatal jaundice for prevention of bilirubin neurotoxic effects is essential. As national neonatal hearing screening is not yet established in Iran, we recommend performing ABR to screen for bilirubin neurotoxicity in all cases of hyperbilirubinemia.
Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F
2000-11-01
To evaluate auditory maturation in puppies. Ten clinically normal Beagle puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level down to the wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine the thresholds of the RCDP and of wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range over which the RCDP could not be recorded (i.e., the pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of the RCDP. The slope of the low-intensity segment of the wave V latency-intensity curve evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 microseconds/dB; similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. The change in the slope of the latency-intensity curve with age suggests enlargement of the audible range of frequencies toward high frequencies up to the third week after birth, while the decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
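The sum/difference bookkeeping described, adding rarefaction and condensation averages to emulate alternate-polarity clicks and subtracting them to obtain the RCDP, together with the pre-RCDP range defined as RCDP threshold minus wave V threshold, can be sketched as below; the arrays and threshold values are illustrative, not data from the study.

```python
import numpy as np

def combine_polarities(rarefaction, condensation):
    """Combine averaged BAEP waveforms to rarefaction and condensation clicks.

    alternate : r + c, the equivalent of alternate-polarity stimulation,
                emphasising the polarity-independent neural BAEP
    rcdp      : r - c, the rarefaction-condensation differential potential,
                emphasising polarity-dependent activity
    """
    r = np.asarray(rarefaction, dtype=float)
    c = np.asarray(condensation, dtype=float)
    return r + c, r - c

def pre_rcdp_range_db(rcdp_threshold_db, wave_v_threshold_db):
    """Intensity range over which no RCDP is recorded: RCDP threshold minus
    wave V threshold, both in dB normal hearing level."""
    return rcdp_threshold_db - wave_v_threshold_db

# Usage with fake 256-sample averages and illustrative thresholds.
alternate, rcdp = combine_polarities(np.random.randn(256), np.random.randn(256))
print(pre_rcdp_range_db(60, 20))   # e.g. 40 dB
```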
Anatomy, Physiology and Function of the Auditory System
NASA Astrophysics Data System (ADS)
Kollmeier, Birger
The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (the cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.
Cochlear hearing loss in patients with Laron syndrome.
Attias, Joseph; Zarchi, Omer; Nageris, Ben I; Laron, Zvi
2012-02-01
The aim of this prospective clinical study was to test auditory function in patients with Laron syndrome, either untreated or treated with insulin-like growth factor I (IGF-I). The study group consisted of 11 patients with Laron syndrome: 5 untreated adults, 5 children and young adults treated with replacement IGF-I starting at bone age <2 years, and 1 adolescent who started replacement therapy at bone age 4.6 years. The auditory evaluation included pure tone and speech audiometry, tympanometry and acoustic reflexes, otoacoustic emissions, loudness dynamics, auditory brain stem responses and a hyperacusis questionnaire. All untreated patients and the patient who started treatment late had various degrees of sensorineural hearing loss and auditory hypersensitivity; acoustic middle ear reflexes were absent in most of them. All treated children had normal hearing and no auditory hypersensitivity; most had recordable middle ear acoustic reflexes. In conclusion, auditory defects seem to be associated with Laron syndrome and may be prevented by starting treatment with IGF-I at an early developmental age.
Brain-stem evoked potentials and noise effects in seagulls.
Counter, S A
1985-01-01
Brain-stem auditory evoked potentials (BAEP) recorded from the seagull were large-amplitude, short-latency, vertex-positive deflections which originate in the eighth nerve and several brain-stem nuclei. BAEP waveforms were similar in latency and configuration to those reported for certain other lower vertebrates and some mammals. BAEPs recorded at several pure-tone frequencies throughout the seagull's auditory spectrum showed an area of heightened auditory sensitivity between 1 and 3 kHz. This range was also found to be the primary bandwidth of the vocalization output of young seagulls. Masking by white noise and pure tones had marked effects on several parameters of the BAEP. In general, the tone- and click-induced BAEPs were either reduced or obliterated by both pure-tone and white-noise maskers at specific signal-to-noise ratios and high intensity levels. The masking effects observed in this study may be related to the manner in which seagulls respond to intense environmental noise. One possible conclusion is that intense environmental noise, such as aircraft engine noise, may severely alter the seagull's localization apparatus and induce sonogenic stress, both of which could cause collisions with low-flying aircraft.
Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve
Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.
2015-01-01
The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538
Kraus, Thomas; Kiess, Olga; Hösl, Katharina; Terekhin, Pavel; Kornhuber, Johannes; Forster, Clemens
2013-09-01
It has recently been shown that electrical stimulation of sensory afferents within the outer auditory canal may facilitate a transcutaneous form of central nervous system stimulation. Functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) effects in limbic and temporal structures have been detected in two independent studies. In the present study, we investigated BOLD fMRI effects in response to transcutaneous electrical stimulation of two different zones in the left outer auditory canal. It was hypothesized that different central nervous system (CNS) activation patterns might help to localize and specifically stimulate auricular cutaneous vagal afferents. Sixteen healthy subjects aged between 20 and 37 years were divided into two groups: 8 subjects were stimulated at the anterior wall, and the other 8 received transcutaneous vagus nerve stimulation (tVNS) at the posterior side of their left outer auditory canal. For sham control, both groups were also stimulated in an alternating manner on the corresponding ear lobe, which is generally known to be free of cutaneous vagal innervation. Functional MR data from the cortex and brain stem level were collected and a group analysis was performed. In most cortical areas, BOLD changes were in the opposite direction when comparing anterior vs. posterior stimulation of the left auditory canal. The only exception was the insular cortex, where both stimulation types evoked positive BOLD changes. Prominent decreases of the BOLD signal were detected in the parahippocampal gyrus, posterior cingulate cortex and right thalamus (pulvinar) following anterior stimulation. In subcortical areas at brain stem level, a stronger BOLD decrease compared with sham stimulation was found in the locus coeruleus and the solitary tract only during stimulation of the anterior part of the auditory canal. The results of the study are in line with previous fMRI studies showing robust BOLD signal decreases in limbic structures and the brain stem during electrical stimulation of the left anterior auditory canal. BOLD signal decreases in the area of the nuclei of the vagus nerve may indicate effective stimulation of vagal afferents. In contrast, stimulation at the posterior wall seems to lead to unspecific changes of the BOLD signal within the solitary tract, which is a key relay station of vagal neurotransmission. The results of the study show promise for a specific novel method of cranial nerve stimulation and provide a basis for further developments and applications of non-invasive transcutaneous vagus stimulation in psychiatric patients. Copyright © 2013 Elsevier Inc. All rights reserved.
Ochi, A; Yasuhara, A; Kobayashi, Y
1998-11-01
This study compares the clinical usefulness of distortion product otoacoustic emissions (DPOAEs) with the auditory brain-stem response (ABR) for the evaluation of hearing impairment in neonates in the neonatal intensive care unit. Both DPOAEs and ABR were performed on 36 neonates (67 ears) on the same day. We defined neonates as having normal hearing when the threshold of wave V of the ABR was ≤45 dB hearing level. (1) We could not obtain DPOAEs at f2 = 977 Hz in neonates with normal hearing because of high noise floors; DPOAE recording time was 36 min shorter than that of ABR. (2) We defined DPOAEs as normal when the DPgram exceeded the noise floor by ≥4 dB at ≥4 of the 6 f2 frequencies from 1416 Hz to 7959 Hz. (3) Normal ABR thresholds and normal DPOAEs were found at the same rate (68.7%), and results differed between ABR and DPOAEs in 6.0%. Our study indicates that DPOAEs represent a simple procedure which can be easily performed in the NICU to obtain reliable results in high-risk neonates. Results obtained by DPOAEs were comparable to those obtained by the more complex procedure of ABR.
NASA Astrophysics Data System (ADS)
Belanger, Andrea J.; Higgs, Dennis M.
2005-04-01
The round goby (Neogobius melanostomus) is an invasive species in the Great Lakes watershed. Adult round gobies show behavioral responses to conspecific vocalizations, but physiological investigations have not yet been conducted to quantify their hearing abilities. We have been examining the physiological and morphological development of the auditory system in the round goby. Various frequencies (100 Hz to 800 Hz and conspecific sounds) at various intensities (120 dB to 170 dB re 1 μPa) were presented to juveniles and adults, and their auditory brain-stem responses (ABR) were recorded. Round gobies only respond physiologically to tones from 100-600 Hz, with thresholds varying between 145 and 155 dB re 1 μPa. The response threshold to conspecific sounds was 140 dB re 1 μPa. There was no significant difference in auditory threshold between sizes of fish for either tones or conspecific sounds. Saccular epithelia were stained using phalloidin, and there was a trend toward an increase in both hair cell number and density with an increase in fish size. These results represent a first attempt to quantify auditory abilities in this invasive species. This is an important step in understanding their reproductive physiology, which could potentially aid in their population control. [Funded by NSERC.]
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Cerebral responses to local and global auditory novelty under general anesthesia
Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir
2017-01-01
Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046
Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin
2015-01-01
Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3 day-old piglets by clamping both carotid arteries for 30 minutes with vascular occluders and lowering the fraction of inspired oxygen. We compared the Auditory Brain Responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.
Effect of iron-deficiency anemia on cognitive skills and neuromaturation in infancy and childhood.
Walter, Tomas
2003-12-01
Iron-deficiency anemia in infancy has been consistently shown to negatively influence performance in tests of psychomotor development. In most studies of short-term follow-up, lower scores did not improve with iron therapy, despite complete hematologic replenishment. The negative impact on psychomotor development of iron-deficiency anemia (IDA) in infancy has been well documented in more than a dozen studies during the last two decades. Two studies will be presented here to further support this assertion. Additionally, we will present some data referring to longer follow-up at 5 and 10 years as well as data concerning recent descriptions of the neurologic derangements that may underlie these behavioral effects. To evaluate whether these deficits may revert after long-term observation, a cohort of infants was re-evaluated at 5 and 10 years of age. Two studies using comparable tools of cognitive development have examined children aged 5 years who had anemia as infants, showing persistent, consistent, and important disadvantages in those who were formerly anemic. These tests were better predictors of future achievement than psychomotor scores. These children were again examined at 10 years and showed lower school achievement and poorer fine-hand movements. Studies of neurologic maturation in a new cohort of infants aged 6 months included auditory brain stem responses and naptime 18-lead sleep studies. The central conduction time of the auditory brain stem responses was slower at 6, 12, and 18 months and at 4 years, despite iron therapy beginning at 6 months. During the sleep-wakefulness cycle, heart-rate variability--a developmental expression of the autonomic nervous system--was less mature in anemic infants. The proposed mechanisms are altered auditory-nerve and vagal-nerve myelination, respectively, as iron is required for normal myelin synthesis.
A Bayesian Account of Vocal Adaptation to Pitch-Shifted Auditory Feedback
Hahnloser, Richard H. R.
2017-01-01
Motor systems are highly adaptive. Both birds and humans compensate for synthetically induced shifts in the pitch (fundamental frequency) of auditory feedback stemming from their vocalizations. Pitch-shift compensation is partial in the sense that large shifts lead to smaller relative compensatory adjustments of vocal pitch than small shifts. Also, compensation is larger in subjects with high motor variability. To formulate a mechanistic description of these findings, we adapt a Bayesian model of error relevance. We assume that vocal-auditory feedback loops in the brain cope optimally with known sensory and motor variability. Based on measurements of motor variability, optimal compensatory responses in our model provide accurate fits to published experimental data. Optimal compensation correctly predicts sensory acuity, which has been estimated in psychophysical experiments as just-noticeable pitch differences. Our model extends the utility of Bayesian approaches to adaptive vocal behaviors. PMID:28135267
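One standard way to see why compensation is partial and grows with motor variability is a Gaussian cue-combination sketch. The formulation below is a generic illustration of that Bayesian idea, with an error-relevance weighting added to capture the reduced relative compensation for large shifts; it is not necessarily the exact model used in the paper.

```latex
% Generic Bayesian sketch (not the paper's exact model).
% A pitch shift s is applied to auditory feedback. With motor variability
% \sigma_m^2 (prior on self-produced pitch errors) and auditory noise
% \sigma_a^2, the posterior estimate of the self-produced error is shrunk:
\[
  \hat{e} \;=\; \frac{\sigma_m^2}{\sigma_m^2 + \sigma_a^2}\, s ,
  \qquad
  \Delta p_{\mathrm{comp}} \;=\; -\, P(\mathrm{self}\mid s)\,\hat{e},
\]
% where P(self | s), the probability that the perceived error is relevant
% (self-generated rather than externally imposed), decreases with |s|.
% Subjects with larger \sigma_m^2 weight the feedback more heavily and
% compensate more, while large shifts are judged less likely to be
% self-generated and draw proportionally smaller compensation.
```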
On wavelet analysis of auditory evoked potentials.
Bradley, A P; Wilson, W J
2004-05-01
To determine a preferred wavelet transform (WT) procedure for multi-resolution analysis (MRA) of auditory evoked potentials (AEP). A number of WT algorithms, mother wavelets, and pre-processing techniques were examined by way of critical theoretical discussion followed by experimental testing of key points using real and simulated auditory brain-stem response (ABR) waveforms. Conclusions from these examinations were then tested on a normative ABR dataset. The results of the various experiments are reported in detail. Optimal AEP WT MRA is most likely to occur when an over-sampled discrete wavelet transformation (DWT) is used, utilising a smooth (regularity ≥3) and symmetrical (linear phase) mother wavelet, and a reflection boundary extension policy. This study demonstrates the practical importance of, and explains how to minimize potential artefacts due to, 4 inter-related issues relevant to AEP WT MRA, namely shift variance, phase distortion, reconstruction smoothness, and boundary artefacts.
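A minimal sketch of the recipe summarized above (an over-sampled, i.e. stationary, DWT with a smooth symmetric mother wavelet and reflection boundary extension) is given below, assuming PyWavelets and NumPy. The specific wavelet ('sym8'), level count, padding helper, and synthetic waveform are illustrative choices, not the paper's exact settings.

```python
# Sketch of over-sampled (stationary) wavelet MRA of an AEP/ABR waveform,
# using a smooth, symmetric (linear-phase) mother wavelet and reflection
# boundary extension. Wavelet choice, level count and padding are assumptions.
import numpy as np
import pywt

def abr_swt_mra(waveform, wavelet="sym8", level=5):
    """Return stationary-WT coefficients of a 1-D AEP after reflect padding."""
    x = np.asarray(waveform, dtype=float)
    block = 2 ** level                       # pywt.swt needs len % 2**level == 0
    pad = (-len(x)) % block
    # Reflection ("symmetric") boundary extension to reduce edge artefacts.
    x_pad = np.pad(x, (0, pad), mode="reflect") if pad else x
    coeffs = pywt.swt(x_pad, wavelet, level=level)   # undecimated, shift-invariant
    return coeffs, len(x)                    # keep original length to trim later

# Example on a synthetic "ABR-like" waveform sampled at 20 kHz.
fs = 20000
t = np.arange(0, 0.012, 1 / fs)              # 12 ms epoch
abr = 0.5e-6 * np.sin(2 * np.pi * 900 * t) * np.exp(-((t - 0.004) / 0.002) ** 2)
abr += 0.1e-6 * np.random.randn(t.size)      # additive noise
coeffs, n = abr_swt_mra(abr)
print(len(coeffs), [c[1][:n].shape for c in coeffs])
```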
NASA Technical Reports Server (NTRS)
Hoffman, L. F.; Horowitz, J. M.
1984-01-01
The effect of decreasing brain temperature on the brainstem auditory evoked response (BAER) in rats was investigated. Voltage pulses, applied to a piezoelectric crystal attached to the skull, were used to deliver stimuli to the auditory system by means of bone-conducted vibrations. The responses were recorded at brain temperatures of 37 C and 34 C. The peaks of the BAER recorded at 34 C were delayed in comparison with the peaks from the 37 C wave, and the later peaks were more delayed than the earlier peaks. These results indicate that an increase in the interpeak latency occurs as the brain temperature is decreased. Preliminary experiments, in which responses to brief angular acceleration were used to measure the brainstem vestibular evoked response (BVER), have also indicated increases in the interpeak latency in response to the lowering of brain temperature.
A nonlinear filter-bank model of the guinea-pig cochlear nerve: Rate responses
NASA Astrophysics Data System (ADS)
Sumner, Christian J.; O'Mard, Lowel P.; Lopez-Poveda, Enrique A.; Meddis, Ray
2003-06-01
The aim of this study is to produce a functional model of the auditory nerve (AN) response of the guinea-pig that reproduces a wide range of important responses to auditory stimulation. The model is intended for use as an input to larger scale models of auditory processing in the brain-stem. A dual-resonance nonlinear filter architecture is used to reproduce the mechanical tuning of the cochlea. Transduction to the activity on the AN is accomplished with a recently proposed model of the inner-hair-cell. Together, these models have been shown to be able to reproduce the response of high-, medium-, and low-spontaneous rate fibers from the guinea-pig AN at high best frequencies (BFs). In this study we generate parameters that allow us to fit the AN model to data from a wide range of BFs. By varying the characteristics of the mechanical filtering as a function of the BF it was possible to reproduce the BF dependence of frequency-threshold tuning curves, AN rate-intensity functions at and away from BF, compression of the basilar membrane at BF as inferred from AN responses, and AN iso-intensity functions. The model is a convenient computational tool for the simulation of the range of nonlinear tuning and rate-responses found across the length of the guinea-pig cochlear nerve.
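The dual-resonance nonlinear idea can be sketched as two parallel paths, a linear band-pass path and a compressive ("broken-stick") band-pass path, whose outputs are summed. The sketch below, with Butterworth band-pass filters and made-up parameter values, is only meant to illustrate that architecture, not the published guinea-pig parameter set or the accompanying inner-hair-cell model.

```python
# Architectural sketch of a dual-resonance nonlinear (DRNL) style filter:
# a linear band-pass path in parallel with a band-pass -> broken-stick
# compression -> band-pass path, summed at the output. Filter orders,
# bandwidths, and gain/compression constants below are illustrative only.
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(x, low_hz, high_hz, fs, order=2):
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return lfilter(b, a, x)

def broken_stick(x, a=1000.0, b=0.3, c=0.25):
    """Piecewise (linear at low level, compressive at high level) nonlinearity."""
    return np.sign(x) * np.minimum(a * np.abs(x), b * np.abs(x) ** c)

def drnl_like(stimulus, fs, cf=4000.0):
    bw_lin, bw_nl = 0.4 * cf, 0.3 * cf            # assumed bandwidths
    lin = 50.0 * bandpass(stimulus, cf - bw_lin, cf + bw_lin, fs)
    nl = bandpass(stimulus, cf - bw_nl, cf + bw_nl, fs)
    nl = broken_stick(nl)
    nl = bandpass(nl, cf - bw_nl, cf + bw_nl, fs)
    return lin + nl                                # basilar-membrane-like output

# Example: 4 kHz tone burst at a moderate level.
fs = 100000
t = np.arange(0, 0.02, 1 / fs)
tone = 0.01 * np.sin(2 * np.pi * 4000 * t)         # pressure in arbitrary units
bm = drnl_like(tone, fs)
print(bm.shape, float(np.max(np.abs(bm))))
```

At low input levels the linear path and the linear branch of the broken-stick dominate, while at higher levels the compressive branch takes over, which is the qualitative behavior the filter architecture is designed to capture.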
Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
2017-03-01
Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered across all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noise. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources across modulation frequencies suggested that the identified sources in the brainstem and in the left and right auditory cortex respond more strongly to 40 Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhai, S-Q; Guo, W; Hu, Y-Y; Yu, N; Chen, Q; Wang, J-Z; Fan, M; Yang, W-Y
2011-05-01
To explore the protective effects of brain-derived neurotrophic factor on the noise-damaged cochlear spiral ganglion. Recombinant adenovirus brain-derived neurotrophic factor vector, recombinant adenovirus LacZ and artificial perilymph were prepared. Guinea pigs with audiometric auditory brainstem response thresholds of more than 75 dB SPL, measured seven days after four hours of noise exposure at 135 dB SPL, were divided into three groups. Adenovirus brain-derived neurotrophic factor vector, adenovirus LacZ and perilymph were infused into the cochleae of the three groups, respectively. Eight weeks later, the cochleae were stained immunohistochemically and the spiral ganglion cells counted. The auditory brainstem response threshold recorded before and seven days after noise exposure did not differ significantly between the three groups. However, eight weeks after cochlear perfusion, the group receiving brain-derived neurotrophic factor had a significantly decreased auditory brainstem response threshold and an increased spiral ganglion cell count, compared with the adenovirus LacZ and perilymph groups. When administered via cochlear infusion following noise damage, brain-derived neurotrophic factor appears to improve the auditory threshold, and to have a protective effect on the spiral ganglion cells.
High lead exposure and auditory sensory-neural function in Andean children.
Counter, S A; Vahter, M; Laurell, G; Buchanan, L H; Ortega, F; Skerfving, S
1997-01-01
We investigated blood lead (B-Pb) and mercury (B-Hg) levels and auditory sensory-neural function in 62 Andean school children living in a Pb-contaminated area of Ecuador and 14 children in a neighboring gold mining area with no known Pb exposure. The median B-Pb level for the 62 children in the Pb-exposed group was 52.6 micrograms/dl (range 9.9-110.0 micrograms/dl) compared with 6.4 micrograms/dl (range 3.9-12.0 micrograms/dl) for the children in the non-Pb-exposed group; the differences were statistically significant (p < 0.001). Auditory thresholds for the Pb-exposed group were normal at the pure tone frequencies of 0.25-8 kHz over the entire range of B-Pb levels. Auditory brain stem response tests in seven children with high B-Pb levels showed normal absolute peak and interpeak latencies. The median B-Hg levels were 0.16 micrograms/dl (range 0.04-0.58 micrograms/dl) for children in the Pb-exposed group and 0.22 micrograms/dl (range 0.1-0.44 micrograms/dl) for children in the non-Pb-exposed gold mining area, and showed no significant relationship to auditory function. PMID:9222138
Electrophysiological measurement of human auditory function
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener will influence the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether he expects the signal.
Baltus, Alina; Herrmann, Christoph Siegfried
2016-06-01
Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Complications of pediatric auditory brain stem implantation via retrosigmoid approach.
Bayazit, Yildirim A; Abaday, Ayça; Dogulu, Fikret; Göksu, Nebil
2011-01-01
We aimed to present the complications of auditory brain stem implantation (ABI) performed via the retrosigmoid approach in pediatric patients. Between March 2007 and February 2010, five prelingually deaf children underwent ABI (Medel device) surgery via the retrosigmoid approach. All children had severe cochlear malformations. Their ages ranged from 20 months to 5 years. The perioperative complications encountered in 2 patients were evaluated retrospectively. No intraoperative complication was observed in the patients. Cerebrospinal fluid (CSF) leakage was the most common postoperative complication, seen in 2 patients. The CSF leak triggered a cascade of comorbidities and prolonged the hospitalization. Pediatric ABI surgery can lead to morbidity. CSF leak is the most common complication encountered with the retrosigmoid approach. The other complications usually result from the long hospital stay during the treatment period of the CSF leak. Therefore, every attempt must be made to prevent the occurrence of CSF leaks in pediatric ABI operations. Copyright © 2011 S. Karger AG, Basel.
Functional Brain Activation in Response to a Clinical Vestibular Test Correlates with Balance
Noohi, Fatemeh; Kinnaird, Catherine; DeDios, Yiri; Kofman, Igor S.; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael
2017-01-01
The current study characterizes brain fMRI activation in response to two modes of vestibular stimulation: Skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either a vestibulo-spinal reflex [saccular-mediated cervical Vestibular Evoked Myogenic Potentials (cVEMP)], or an ocular muscle response [utricle-mediated ocular VEMP (oVEMP)]. Research suggests that the skull tap elicits both saccular and utricle-mediated VEMPs, while being faster and less irritating for subjects than the high decibel tones required to elicit VEMPs. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of brain activity. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that pneumatically powered skull taps would elicit a similar pattern of brain activity as shown in previous studies. Our results provide the first evidence of using pneumatically powered skull taps to elicit vestibular activity inside the MRI scanner. A conjunction analysis revealed that skull taps elicit overlapping activation with auditory tone bursts in the canonical vestibular cortical regions. Further, our postural control assessments revealed that greater amplitude of brain activation in response to vestibular stimulation was associated with better balance control for both techniques. Additionally, we found that skull taps elicit more robust vestibular activity compared to auditory tone bursts, with less reported aversive effects, highlighting the utility of this approach for future clinical and basic science research. PMID:28344549
Left and right reaction time differences to the sound intensity in normal and AD/HD children.
Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza
2017-06-01
The right hemisphere, which is implicated in sound intensity discrimination, shows abnormalities in people with attention deficit/hyperactivity disorder (AD/HD). However, it has not been studied whether this right-hemisphere deficit influences the intensity sensation of AD/HD subjects. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. The statistical results showed that the sensitivity of AD/HD subjects to intensity was lower than that of the normal group (p < 0.0001). The left and right pathways of the auditory system had the same pattern of response in AD/HD subjects (p > 0.05). However, in the control group the left pathway was more sensitive to the sound intensity level than the right one (p = 0.0156). It is therefore plausible that the deficit of the right hemisphere affects the auditory sensitivity of AD/HD children. Possible deficits in other auditory system components, such as the middle ear, inner ear, or the brain stem nuclei involved, may also contribute to the observed results. The development of new biomarkers based on the sensitivity of the brain hemispheres to sound intensity is suggested to estimate the risk of AD/HD. Designing new techniques to correct auditory feedback in behavioral treatment sessions is also proposed. Copyright © 2017. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Robert C.; Harry, G. Jean; Valentine, William M.
2005-09-01
Inhalation studies were conducted on the hazardous air pollutants, carbon disulfide, which targets the central nervous system (spinal cord) and peripheral nervous system (distal portions of long myelinated axons), and carbonyl sulfide, which targets the central nervous system (brain). The objectives were to investigate the neurotoxicity of these compounds by a comprehensive evaluation of function, structure, and mechanisms of disease. Through interdisciplinary research, the major finding in the carbon disulfide inhalation studies was that carbon disulfide produced intra- and intermolecular protein cross-linking in vivo. The observation of dose-dependent covalent cross-linking in neurofilament proteins prior to the onset of lesions is consistent with this process contributing to the development of the neurofilamentous axonal swellings characteristic of carbon disulfide neurotoxicity. Of significance is that valine-lysine thiourea cross-linking on rat globin and lysine-lysine thiourea cross-linking on erythrocyte spectrin reflect cross-linking events occurring within the axon and could potentially serve as biomarkers of carbon disulfide exposure and effect. In the carbonyl sulfide studies, using magnetic resonance microscopy (MRM), we determined that carbonyl sulfide targets the auditory pathway in the brain. MRM allowed the examination of 200 brain slices and made it possible to identify the most vulnerable sites of neurotoxicity, which would have been missed in our traditional neuropathology evaluations. Electrophysiological studies were focused on the auditory system and demonstrated decreases in auditory brain stem evoked responses. Similarly, mechanistic studies focused on evaluating cytochrome oxidase activity in the posterior colliculus and parietal cortex. A decrease in cytochrome oxidase activity was considered to be a contributing factor to the pathogenesis of carbonyl sulfide neurotoxicity.
The role of RIP3 mediated necroptosis in ouabain-induced spiral ganglion neurons injuries.
Wang, Xi; Wang, Ye; Ding, Zhong-jia; Yue, Bo; Zhang, Peng-zhi; Chen, Xiao-dong; Chen, Xin; Chen, Jun; Chen, Fu-quan; Chen, Yang; Wang, Ren-feng; Mi, Wen-juan; Lin, Ying; Wang, Jie; Qiu, Jian-hua
2014-08-22
Spiral ganglion neuron (SGN) injury is a generally accepted precursor of auditory neuropathy. Receptor-interacting protein 3 (RIP3) has been reported as an important mediator of the necroptosis pathway that can be blocked by necrostatin-1 (Nec-1). In our study, we sought to identify whether necroptosis participates in SGN injury. Ouabain was applied to establish an SGN injury model. We measured the auditory brain-stem response (ABR) threshold shift as an indicator of auditory function. Positive β3-tubulin immunofluorescence staining indicated the surviving SGNs. RIP3 expression was evaluated using immunofluorescence, quantitative real-time polymerase chain reaction and western blot. SGN injury promoted an increase in RIP3 expression that could be suppressed by application of the necroptosis inhibitor Nec-1. A decreased ABR threshold shift and increased SGN density were observed when Nec-1 was administered together with the apoptosis inhibitor N-benzyloxycarbonyl-Val-Ala-Asp-fluoromethylketone (Z-VAD). These results demonstrate that necroptosis is an indispensable pathway, separate from apoptosis, leading to SGN death, in which RIP3 plays an important role. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Burkard, R.; Jones, S.; Jones, T.
1994-01-01
Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
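The MLS cross-correlation idea used here can be illustrated with a short simulation: responses to a pseudorandom pulse train overlap in the raw record, but circular cross-correlation with the ±1-valued maximum length sequence recovers the transient BAER. The sketch below (synthetic waveform, SciPy's max_len_seq, made-up sampling rate and MPI values) illustrates the general deconvolution technique, not the authors' acquisition code.

```python
# Sketch of BAER recovery from overlapping responses to an MLS pulse train
# by circular cross-correlation. The synthetic response, sampling rate and
# minimum pulse interval (MPI) below are illustrative assumptions.
import numpy as np
from scipy.signal import max_len_seq

fs = 20000                                   # Hz digitization rate
mpi = 0.001                                  # 1-ms minimum pulse interval (assumed)
P = int(mpi * fs)                            # samples per MPI slot

seq = max_len_seq(8)[0]                      # 255-point binary MLS (0/1)
L = seq.size

# Upsample: one stimulus slot of P samples per MLS element.
stim = np.zeros(L * P)
stim[np.nonzero(seq)[0] * P] = 1.0           # click wherever the MLS is 1

# Synthetic single-click "BAER" (a damped oscillation), shorter than the record.
t = np.arange(0, 0.008, 1 / fs)
h = np.sin(2 * np.pi * 600 * t) * np.exp(-t / 0.002)
h_full = np.zeros(L * P)
h_full[: h.size] = h

# Raw record: circular convolution of pulse train with the response, plus noise.
record = np.real(np.fft.ifft(np.fft.fft(stim) * np.fft.fft(h_full)))
record += 0.05 * np.random.randn(record.size)

# Recovery: circular cross-correlation with the +/-1 MLS, scaled by (L+1)/2.
m = np.zeros(L * P)
m[np.arange(L) * P] = 2.0 * seq - 1.0        # +/-1 sequence in the same slots
recovered = np.real(np.fft.ifft(np.fft.fft(record) * np.conj(np.fft.fft(m))))
recovered /= (L + 1) / 2.0

print(np.corrcoef(recovered[: h.size], h)[0, 1])   # close to 1
```

The recovery works because the circular autocorrelation of a ±1-mapped MLS is essentially an impulse, so cross-correlating the overlapping record with the sequence collapses it back to the single-click response.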
Brain stem auditory-evoked response of the nonanesthetized dog.
Marshall, A E
1985-04-01
The brain stem auditory-evoked response was measured from a group of 24 healthy dogs under conditions suitable for clinical diagnostic use. The waveforms were identified, and analyses of amplitude ratios, latencies, and interpeak latencies were done. The group was subdivided into subgroups based on tranquilization, nontranquilization, sex, and weight. Differences were not observed among any of these subgroups. All dogs responded to the click stimulus from 30 dB to 90 dB, but only 62.5% of the dogs responded at 5 dB. The total number of peaks averaged 1.6 at 5 dB, increased linearly to 6.5 at 50 dB, and remained at 6.5 up to 90 dB. The frequency of recognizability of each wave was tabulated for each stimulus intensity tested; recognizability increased with increasing stimulus intensity. Amplitudes of waves increased with increasing stimulus intensity, but were highly variable. The 4th wave had the greatest amplitude at the lower stimulus intensities, and the 1st wave had the greatest amplitude at the higher stimulus intensities. The amplitude ratio of the 1st to 5th wave was greater than 1 at less than or equal to 50 dB stimulus intensity, and was 1 for stimulus intensities greater than 50 dB. Interpeak latencies did not change relative to stimulus intensities. Peak latencies of each wave averaged at 5-dB hearing level for the 1st to 6th waves were 2.03, 2.72, 3.23, 4.14, 4.41, and 6.05 ms, respectively; latencies of these 6 waves at 90 dB were 0.92, 1.79, 2.46, 3.03, 3.47, and 4.86 ms, respectively. Latency decreased by 0.009 to 0.014 ms/dB across waves.
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Sieratzki, J S; Calvert, G A; Brammer, M; David, A; Woll, B
2001-06-01
Landau-Kleffner syndrome (LKS) is an acquired aphasia which begins in childhood and is thought to arise from an epileptic disorder within the auditory speech cortex. Although the epilepsy usually subsides at puberty, a severe communication impairment often persists. Here we report on a detailed study of a 26-year-old, left-handed male, with onset of LKS at age 5 years, who is aphasic for English but who learned British Sign Language (BSL) at age 13. We have investigated his skills in different language modalities, recorded EEGs during wakefulness, sleep, and under conditions of auditory stimulation, measured brain stem auditory-evoked potentials (BAEP), and performed functional MRI (fMRI) during a range of linguistic tasks. Our investigation demonstrated severe restrictions in comprehension and production of spoken English as well as lip-reading, while reading was comparatively less impaired. BSL was by far the most efficient mode of communication. All EEG recordings were normal, while BAEP showed minor abnormalities. fMRI revealed: 1) powerful and extensive bilateral (R > L) activation of auditory cortices in response to heard speech, much stronger than when listening to music; 2) very little response to silent lip-reading; 3) strong activation in the temporo-parieto-occipital association cortex, exclusively in the right hemisphere (RH), when viewing BSL signs. Analysis of these findings provides novel insights into the disturbance of the auditory speech cortex which underlies LKS and its diagnostic evaluation by fMRI, and underpins a strategy of restoring communication abilities in LKS through a natural sign language of the deaf.
Strait, Dana L.; Kraus, Nina
2011-01-01
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636
Patrick, Peter D; Mabry, Jennifer L; Gurka, Matthew J; Buck, Marcia L; Boatwright, Evelyn; Blackman, James A
2007-01-01
To explore the relationship between the location and pattern of brain injury identified on MRI and a prolonged low response state in children after traumatic brain injury (TBI). This observational study compared 15 children who spontaneously recovered within 30 days post-TBI to 17 who remained in a prolonged low response state. Of the children with brain stem injury, 92.9% were in the low response group. The predicted probability was 0.81 for brain stem injury alone, increasing to 0.95 with a regional pattern of injury to the brain stem, basal ganglia, and thalamus. A low response state in children post-TBI is strongly correlated with two distinctive regions of injury: the brain stem alone, and an injury pattern involving the brain stem, basal ganglia, and thalamus. This study demonstrates the need for large-scale clinical studies using MRI as a tool for outcome assessment in children and adolescents following severe TBI.
[Perceptive deafness and AIDS].
Sancho, E M; Domínguez, L; Urpegui, A; Martínez, J; Jiménez, M; Bretos, S; Vallés, H
1997-06-01
We report the case of a 23-year-old woman, HIV-positive for the past five years, with a four-year history of right-sided perceptive (sensorineural) hearing loss without tinnitus, vertigo, or any other otologic symptoms. After reviewing her personal and family history and performing pure-tone audiometry, bilateral tympanometry, contralateral stapedial reflex testing, auditory evoked brain-stem response, and bilateral nasal fiberoptic endoscopy, we analyzed the evolution of her immune deficiency and the treatments she has received, with the purpose of determining the risk factors that coincided in this case and of establishing criteria for following the auditory involvement of HIV-positive patients.
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
Neuronal chronometry of target detection: fusion of hemodynamic and event-related potential data.
Calhoun, V D; Adali, T; Pearlson, G D; Kiehl, K A
2006-04-01
Event-related potential (ERP) studies of the brain's response to infrequent, target (oddball) stimuli elicit a sequence of physiological events, the most prominent and well-studied being the P300 (or P3) complex, peaking approximately 300 ms post-stimulus for simple stimuli and slightly later for more complex stimuli. Localization of the neural generators of the human oddball response remains challenging due to the lack of a single imaging technique with good spatial and temporal resolution. Here, we use independent component analyses to fuse ERP and fMRI modalities in order to examine the dynamics of the auditory oddball response with high spatiotemporal resolution across the entire brain. Initial activations in auditory and motor planning regions are followed by auditory association cortex and motor execution regions. The P3 response is associated with brainstem, temporal lobe, and medial frontal activity and finally a late temporal lobe "evaluative" response. We show that fusing imaging modalities with different advantages can provide new information about the brain.
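The fusion approach described here can be roughly illustrated as a "joint ICA": each subject contributes one feature vector made by concatenating an ERP time course with a flattened fMRI contrast map, and ICA on the stacked subjects-by-features matrix yields components with linked ERP and fMRI parts. The sketch below uses scikit-learn's FastICA on simulated data and is a generic illustration of that idea, with invented dimensions, rather than the authors' pipeline.

```python
# Generic "joint ICA" sketch for ERP/fMRI fusion: concatenate per-subject
# ERP and fMRI features, run ICA across subjects, then split each component
# back into a linked ERP time course and fMRI spatial map. Dimensions and
# the simulated data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_erp, n_vox = 40, 300, 5000     # invented sizes

# Simulated data: two latent processes expressed jointly in both modalities.
mixing = rng.normal(size=(n_subjects, 2))
erp_parts = rng.normal(size=(2, n_erp))
fmri_parts = rng.normal(size=(2, n_vox))
erp = mixing @ erp_parts + 0.5 * rng.normal(size=(n_subjects, n_erp))
fmri = mixing @ fmri_parts + 0.5 * rng.normal(size=(n_subjects, n_vox))

# Normalize each modality, then concatenate features subject-wise.
erp_z = (erp - erp.mean(0)) / erp.std(0)
fmri_z = (fmri - fmri.mean(0)) / fmri.std(0)
joint = np.hstack([erp_z, fmri_z])           # shape: subjects x (ERP + voxels)

ica = FastICA(n_components=2, random_state=0, max_iter=1000)
subject_loadings = ica.fit_transform(joint)  # per-subject expression of each component
components = ica.components_                 # joint sources

for k, comp in enumerate(components):
    erp_course, spatial_map = comp[:n_erp], comp[n_erp:]
    print(f"component {k}: ERP part {erp_course.shape}, map part {spatial_map.shape}")
```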
Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M
2011-11-01
Auditory evoked potential (AEP) components correspond to sequential activation of brain structures within the auditory pathway and reveal neural activity during sensory processing. To investigate state-dependent modulation of stimulus intensity response profiles within different brain structures, we assessed AEP components across both stimulus intensity and state. We implanted adult female Sprague-Dawley rats (N = 6) with electrodes to measure EEG, EKG, and EMG. Intermittent auditory stimuli (6-12 s) varying from 50 to 75 dBa were delivered over a 24-h period. Data were parsed into 2-s epochs and scored for wake/sleep state. All AEP components increased in amplitude with increased stimulus intensity during wake. During quiet sleep, however, only the early latency response (ELR) showed this relationship, while the middle latency response (MLR) increased at the highest 75 dBa intensity, and the late latency response (LLR) showed no significant change across the stimulus intensities tested. During rapid eye movement sleep (REM), both ELR and LLR increased, similar to wake, but MLR was severely attenuated. Stimulation intensity and the corresponding AEP response profile were dependent on both brain structure and sleep state. Lower brain structures maintained stimulus intensity and neural response relationships during sleep. This relationship was not observed in the cortex, implying state-dependent modification of stimulus intensity coding. Since AEP amplitude is not modulated by stimulus intensity during sleep, differences between paired 75/50 dBa stimuli could be used to determine state better than individual intensities.
Yoder, Kathleen M.; Vicario, David S.
2012-01-01
Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. PMID:22201281
Neurotrophic factor intervention restores auditory function in deafened animals
NASA Astrophysics Data System (ADS)
Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.
2002-02-01
A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.
Brain stem auditory-evoked response in the nonanesthetized horse and pony.
Marshall, A E
1985-07-01
The brain stem auditory-evoked response (BAER) was measured in 10 horses and 7 ponies under conditions suitable for clinical diagnostic testing. Latencies of 5 vertex-positive peaks and interpeak latency and amplitude ratio on the 1st and 4th peaks were determined. Data from horses and ponies were analyzed separately and were compared. The stimulus was a click (n = 3,000) ranging from 10- to 90-dB hearing level (HL). Neither horses nor ponies responded with a BAER at 10 dB nor did they give reliable responses at less than 50 dB. The 2nd of the BAER waves appeared in the record at lower stimulus intensities than did the 1st wave for the horse and pony. Horses and ponies had a decreasing latency for all waves, as a result of increasing stimulus intensity. Latencies were shorter for the ponies than for the horses at all stimulus intensities for the 1st, 2nd, 3rd, and 4th waves, but not the 5th wave. At 60-dB HL, the mean latencies for the 1st through 5th wave, respectively, for the horse were 1.73, 3.08, 3.93, 4.98, and 6.00 ms and for the pony 1.48, 2.73, 3.50, 4.56, and 6.58 ms. Interpeak latencies, 1st to 4th wave, averaged 3.22 ms (horse) and 3.11 ms (pony) for all stimulus intensities from 50- to 90-dB HL and had a tendency to decrease slightly as stimulus intensity increased. Amplitude ratios (4th wave/1st wave) were less than 1 for all stimulus intensities in the horse. In the pony, the ratio was less than 1 at greater than or equal to 70-dB HL and greater than 1 at less than or equal to 60-dB HL.
Pitch-Responsive Cortical Regions in Congenital Amusia.
Norman-Haignere, Sam V; Albouy, Philippe; Caclin, Anne; McDermott, Josh H; Kanwisher, Nancy G; Tillmann, Barbara
2016-03-09
Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work. Copyright © 2016 the authors.
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
2016-01-01
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463
Nanofibrous scaffolds for the guidance of stem cell-derived neurons for auditory nerve regeneration.
Hackelberg, Sandra; Tuck, Samuel J; He, Long; Rastogi, Arjun; White, Christina; Liu, Liqian; Prieskorn, Diane M; Miller, Ryan J; Chan, Che; Loomis, Benjamin R; Corey, Joseph M; Miller, Josef M; Duncan, R Keith
2017-01-01
Impairment of spiral ganglion neurons (SGNs) of the auditory nerve is a major cause for hearing loss occurring independently or in addition to sensory hair cell damage. Unfortunately, mammalian SGNs lack the potential for autonomous regeneration. Stem cell based therapy is a promising approach for auditory nerve regeneration, but proper integration of exogenous cells into the auditory circuit remains a fundamental challenge. Here, we present novel nanofibrous scaffolds designed to guide the integration of human stem cell-derived neurons in the internal auditory meatus (IAM), the foramen allowing passage of the spiral ganglion to the auditory brainstem. Human embryonic stem cells (hESC) were differentiated into neural precursor cells (NPCs) and seeded onto aligned nanofiber mats. The NPCs terminally differentiated into glutamatergic neurons with high efficiency, and neurite projections aligned with nanofibers in vitro. Scaffolds were assembled by seeding GFP-labeled NPCs on nanofibers integrated in a polymer sheath. Biocompatibility and functionality of the NPC-seeded scaffolds were evaluated in vivo in deafened guinea pigs (Cavia porcellus). To this end, we established an ouabain-based deafening procedure that depleted an average 72% of SGNs from apex to base of the cochleae and caused profound hearing loss. Further, we developed a surgical procedure to implant seeded scaffolds directly into the guinea pig IAM. No evidence of an inflammatory response was observed, but post-surgery tissue repair appeared to be facilitated by infiltrating Schwann cells. While NPC survival was found to be poor, both subjects implanted with NPC-seeded and cell-free control scaffolds showed partial recovery of electrically-evoked auditory brainstem thresholds. Thus, while future studies must address cell survival, nanofibrous scaffolds pose a promising strategy for auditory nerve regeneration.
Lew, Henry L; Lee, Eun Ha; Miyoshi, Yasushi; Chang, Douglas G; Date, Elaine S; Jerger, James F
2004-03-01
Because of the violent nature of traumatic brain injury (TBI), patients with TBI are susceptible to various types of trauma involving the auditory system. We report a case of a 55-yr-old man who presented with communication problems after traumatic brain injury. Initial results from behavioral audiometry and Weber/Rinne tests were not reliable because of poor cooperation. He was transferred to our service for inpatient rehabilitation, where review of the initial head computed tomographic scan showed only a left temporal bone fracture. Brainstem auditory-evoked potential testing was then performed to evaluate his hearing function. The results showed bilateral absence of auditory-evoked responses, which strongly suggested bilateral deafness. This finding led to a follow-up computed tomographic scan, with focus on the bilateral temporal bones. A subtle transverse fracture of the right temporal bone was then detected, in addition to the left temporal bone fracture previously identified. Like children with hearing impairment, patients with traumatic brain injury may not be able to verbalize their auditory deficits in a timely manner. If hearing loss is suspected in a patient who is unable to participate in traditional behavioral audiometric testing, brainstem auditory-evoked potential testing may be an option for evaluating hearing dysfunction.
Boahen, Kwabena
2013-01-01
A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
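The two mechanisms described above can be illustrated with a toy single-compartment model: a dynamic low-voltage-activated potassium conductance both shortens the resting membrane time constant and grows during depolarization, suppressing late inputs, whereas a static leak matched to the same resting conductance only does the former. The Euler-integrated sketch below uses invented parameter values and is not the silicon-neuron model from the paper.

```python
# Toy comparison of a dynamic low-voltage-activated K+ conductance (gKL)
# versus a static leak of the same resting magnitude. All parameters are
# invented, for illustration only.
import numpy as np

dt, T = 1e-5, 0.02                            # s
t = np.arange(0, T, dt)
C, gL, EL, EK = 20e-12, 2e-9, -65e-3, -90e-3  # F, S, V, V
gKL_max, v_half, k_act, tau_w = 20e-9, -55e-3, 6e-3, 1e-3

def w_inf(v):                                  # steady-state gKL activation
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k_act))

def run(i_syn, dynamic=True):
    v, w = EL, w_inf(EL)
    g_static = gKL_max * w_inf(EL)             # leak matched to resting gKL
    vs = np.empty(t.size)
    for n in range(t.size):
        gk = gKL_max * w if dynamic else g_static
        dv = (-gL * (v - EL) - gk * (v - EK) + i_syn[n]) / C
        v += dt * dv
        if dynamic:
            w += dt * (w_inf(v) - w) / tau_w   # gKL grows with depolarization
        vs[n] = v
    return vs

# Two brief synaptic-like current pulses, 3 ms apart.
i_syn = np.zeros(t.size)
for onset in (0.005, 0.008):
    idx = (t >= onset) & (t < onset + 0.0005)
    i_syn[idx] = 300e-12                       # A

v_dyn = run(i_syn, dynamic=True)
v_stat = run(i_syn, dynamic=False)
# With dynamic gKL the late (second) EPSP is suppressed relative to the
# static-leak case, illustrating the adaptation mechanism described above.
print(v_dyn.max(), v_stat.max())
```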
Spiral Ganglion Stem Cells Can Be Propagated and Differentiated Into Neurons and Glia
Zecha, Veronika; Wagenblast, Jens; Arnhold, Stefan; Edge, Albert S. B.; Stöver, Timo
2014-01-01
Abstract The spiral ganglion is an essential functional component of the peripheral auditory system. Most types of hearing loss are associated with spiral ganglion cell degeneration which is irreversible due to the inner ear's lack of regenerative capacity. Recent studies revealed the existence of stem cells in the postnatal spiral ganglion, which gives rise to the hope that these cells might be useful for regenerative inner ear therapies. Here, we provide an in-depth analysis of sphere-forming stem cells isolated from the spiral ganglion of postnatal mice. We show that spiral ganglion spheres have characteristics similar to neurospheres isolated from the brain. Importantly, spiral ganglion sphere cells maintain their major stem cell characteristics after repeated propagation, which enables the culture of spheres for an extended period of time. In this work, we also demonstrate that differentiated sphere-derived cell populations not only adopt the immunophenotype of mature spiral ganglion cells but also develop distinct ultrastructural features of neurons and glial cells. Thus, our work provides further evidence that self-renewing spiral ganglion stem cells might serve as a promising source for the regeneration of lost auditory neurons. PMID:24940560
Infants’ brain responses to speech suggest Analysis by Synthesis
Kuhl, Patricia K.; Ramírez, Rey R.; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki
2014-01-01
Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners’ knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca’s area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of “motherese” on early language learning, and (iii) the “social-gating” hypothesis and humans’ development of social understanding. PMID:25024207
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
How differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature, and investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either identical or differing in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. The effect of S1 on the response to the second stimulus of a pair was seen in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were identical. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both the same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but by whether or not the two stimuli are identical in timbre.
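The conditioning-testing design above is commonly summarized by a gating ratio, the S2 amplitude divided by the S1 amplitude. The sketch below shows that computation on generic epoch arrays; the latency window, units and array shapes are assumptions, not the authors' MEG pipeline.

```python
import numpy as np

def peak_amplitude(evoked, times, window):
    """Largest absolute deflection of an averaged evoked response within a latency window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    return np.max(np.abs(evoked[mask]))

def gating_ratio(s1_epochs, s2_epochs, times, window=(0.04, 0.06)):
    """S2/S1 amplitude ratio on trial-averaged responses; values well below 1
    indicate stronger suppression (gating) of the second stimulus."""
    s1 = peak_amplitude(s1_epochs.mean(axis=0), times, window)
    s2 = peak_amplitude(s2_epochs.mean(axis=0), times, window)
    return s2 / s1

# Toy single-channel data: 50 trials x 700 samples at 1 kHz for S1 and S2 epochs.
rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.6, 0.001)
s1_epochs = rng.normal(0.0, 1e-14, (50, times.size))
s2_epochs = rng.normal(0.0, 1e-14, (50, times.size))
print("gating ratio in an M50-like window:", gating_ratio(s1_epochs, s2_epochs, times))
```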
Inhibition of caspases alleviates gentamicin-induced cochlear damage in guinea pigs.
Okuda, Takeshi; Sugahara, Kazuma; Takemoto, Tsuyoshi; Shimogori, Hiroaki; Yamashita, Hiroshi
2005-03-01
The efficacy of caspase inhibitors for protecting the cochlea was evaluated in an in vivo study using guinea pigs as the animal model. Gentamicin (12 mg/ml) was delivered via an osmotic pump into the cochlear perilymphatic space of guinea pigs at 0.5 microl/h for 14 days. Additional animals were given either z-Val-Ala-Asp (Ome)-fluoromethyl ketone (z-VAD-FMK) or z-Leu-Glu-His-Asp-FMK (z-LEHD-FMK), a general caspase inhibitor and a caspase 9 inhibitor, respectively, in addition to gentamicin. The elevation in auditory brain stem response thresholds at 4, 7, and 14 days following gentamicin administration was reduced in animals that received either z-VAD-FMK or z-LEHD-FMK. Cochlear sensory hair cells survived in greater numbers in animals that received caspase inhibitors in addition to gentamicin, whereas sensory hair cells in animals that received gentamicin only were severely damaged. These results suggest that auditory cell death induced by gentamicin is closely related to the activation of caspases in vivo.
Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.
ERIC Educational Resources Information Center
Miyamoto, Richard T.; Wong, Donald
2001-01-01
Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…
Disruption of hierarchical predictive coding during sleep
Strauss, Melanie; Sitt, Jacobo D.; King, Jean-Remi; Elbaz, Maxime; Azizi, Leila; Buiatti, Marco; Naccache, Lionel; van Wassenhove, Virginie; Dehaene, Stanislas
2015-01-01
When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding. PMID:25737555
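As a rough illustration of how such a hierarchical paradigm is typically constructed (tone identities, block length and deviant probability here are assumptions, not the study's exact parameters), each five-tone trial carries a local status while the frequency of trial types within a block sets the global regularity:

```python
import random

def make_block(global_standard, global_deviant, n_trials=100, p_deviant=0.2, seed=0):
    """One block: the frequent five-tone trial type defines the global (long-term)
    regularity; rare trials violate it. The fifth tone of each trial defines the
    local (short-term) status."""
    rng = random.Random(seed)
    block = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            block.append(("global_deviant", list(global_deviant)))
        else:
            block.append(("global_standard", list(global_standard)))
    return block

# Local level: xxxxx is locally standard, xxxxY is locally deviant.
xxxxx = ["A", "A", "A", "A", "A"]
xxxxY = ["A", "A", "A", "A", "B"]

# Block 1: the locally deviant trial is frequent, so it is the global standard.
block1 = make_block(global_standard=xxxxY, global_deviant=xxxxx, seed=1)
# Block 2: the reverse mapping dissociates local from global violations.
block2 = make_block(global_standard=xxxxx, global_deviant=xxxxY, seed=2)
print(block1[:3])
```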
Evoked potentials in multiple sclerosis.
Kraft, George H
2013-11-01
Before the development of magnetic resonance imaging (MRI), evoked potentials (EPs), including visual evoked potentials, somatosensory evoked potentials, and brain stem auditory evoked responses, were commonly used to determine a second site of disease in patients being evaluated for possible multiple sclerosis (MS). The identification of an area of the central nervous system showing abnormal conduction was used to supplement the abnormal signs identified on the physical examination, thus identifying the "multiple" in MS. This article is a brief overview of additional ways in which central nervous system (CNS) physiology, as measured by EPs, can still contribute value in the management of MS in the era of MRIs. Copyright © 2013 Elsevier Inc. All rights reserved.
[Development of auditory evoked potentials of the brainstem in relation to age].
Tarantino, V; Stura, M; Vallarino, R
1988-01-01
In order to study the various changes which occur in the waveform, latency and amplitude of the auditory brainstem evoked response (BSER) as a function of age, the authors recorded the BSER from the scalp surface of 20 newborns and 50 infants aged 3 months, 6 months, 1 year and 3 years, as well as from 20 normal adults. The data obtained show that the most reliable waves during the first month of life are waves I, III and V, the last of which is often present even when other vertex-positive peaks are absent. The latencies of the various potential components decreased with maturation. Wave V, evoked by 90 dB sensation level clicks, decreased in latency from 7.12 msec at 1-4 weeks of age to 5.77 msec at 3 years of age. The auditory processes related to peripheral and central transmission were shown to mature at differential rates during the first period of life. By the 6th month, in fact, wave I latency had reached the adult value; in contrast, wave V latency did not match that of the adult until approximately 1 year of age. One obvious explanation for the age-related latency shift is progressive myelination of the auditory tract in infants, as this is known to occur. The authors conclude that the clinical application of this technique in paediatric patients cannot provide reliable information about auditory brain stem activity without taking into account the relationship between age and the characteristics of the BSER.
Inter-subject synchronization of brain responses during natural music listening
Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod
2015-01-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
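Inter-subject synchronization of this kind is commonly quantified as the pairwise correlation of regional time courses across listeners. The sketch below shows the basic computation on toy data; it omits the leave-one-out averaging and statistics of a full pipeline, and the array sizes are assumptions.

```python
import numpy as np
from itertools import combinations

def intersubject_correlation(data):
    """data: array of shape (n_subjects, n_timepoints) for one voxel or region.
    Returns the mean pairwise Pearson correlation across subjects."""
    pairs = combinations(range(data.shape[0]), 2)
    rs = [np.corrcoef(data[i], data[j])[0, 1] for i, j in pairs]
    return float(np.mean(rs))

# Toy data: 10 listeners share a stimulus-driven signal buried in subject noise.
rng = np.random.default_rng(4)
shared = rng.normal(size=500)                        # stimulus-locked component
subjects = shared + rng.normal(0.0, 2.0, (10, 500))  # plus idiosyncratic noise
print("inter-subject correlation:", round(intersubject_correlation(subjects), 3))
```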
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
EEG-based brain-computer interface (BCI) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. In the rapidly presented paradigms used for such BCI speller applications, the brain must engage both stimulus-driven and stimulus-related attention. Few researchers have studied the mechanism of the brain response to such rapidly presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and combined audio-visual stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both visual and auditory stimuli in the combined audio-visual paradigm. Both contributed to the activation of brain regions, with visual stimuli being the predominant driver. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in frontal-temporal regions were likely driven by auditory stimuli. These regions played an important role in the audio-visual bimodal paradigm. These findings are relevant to the future study of ERP spellers as well as of the mechanisms underlying rapidly presented stimuli.
Fröhlich, F; Burrello, T N; Mellin, J M; Cordle, A L; Lustenberger, C M; Gilmore, J H; Jarskog, L F
2016-03-01
Auditory hallucinations are resistant to pharmacotherapy in about 25% of adults with schizophrenia. Treatment with noninvasive brain stimulation would provide a welcomed additional tool for the clinical management of auditory hallucinations. A recent study found a significant reduction in auditory hallucinations in people with schizophrenia after five days of twice-daily transcranial direct current stimulation (tDCS) that simultaneously targeted left dorsolateral prefrontal cortex and left temporo-parietal cortex. We hypothesized that once-daily tDCS with stimulation electrodes over left frontal and temporo-parietal areas reduces auditory hallucinations in patients with schizophrenia. We performed a randomized, double-blind, sham-controlled study that evaluated five days of daily tDCS of the same cortical targets in 26 outpatients with schizophrenia and schizoaffective disorder with auditory hallucinations. We found a significant reduction in auditory hallucinations measured by the Auditory Hallucination Rating Scale (F2,50=12.22, P<0.0001) that was not specific to the treatment group (F2,48=0.43, P=0.65). No significant change of overall schizophrenia symptom severity measured by the Positive and Negative Syndrome Scale was observed. The lack of efficacy of tDCS for treatment of auditory hallucinations and the pronounced response in the sham-treated group in this study contrasts with the previous finding and demonstrates the need for further optimization and evaluation of noninvasive brain stimulation strategies. In particular, higher cumulative doses and higher treatment frequencies of tDCS together with strategies to reduce placebo responses should be investigated. Additionally, consideration of more targeted stimulation to engage specific deficits in temporal organization of brain activity in patients with auditory hallucinations may be warranted. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Visual and auditory steady-state responses in attention-deficit/hyperactivity disorder.
Khaleghi, Ali; Zarafshan, Hadi; Mohammadi, Mohammad Reza
2018-05-22
We designed a study to investigate the patterns of the steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) in adolescents with attention-deficit/hyperactivity disorder (ADHD) when performing a motor response inhibition task. Thirty 12- to 18-year-old adolescents with ADHD and 30 healthy control adolescents underwent an electroencephalogram (EEG) examination during steady-state stimuli when performing a stop-signal task. Then, we calculated the amplitude and phase of the steady-state responses in both visual and auditory modalities. Results showed that adolescents with ADHD had a significantly poorer performance in the stop-signal task during both visual and auditory stimuli. The SSVEP amplitude of the ADHD group was larger than that of the healthy control group in most regions of the brain, whereas the ASSR amplitude of the ADHD group was smaller than that of the healthy control group in some brain regions (e.g., right hemisphere). In conclusion, poorer task performance (especially inattention) and neurophysiological results in ADHD demonstrate a possible impairment in the interconnection of the association cortices in the parietal and temporal lobes and the prefrontal cortex. Also, the motor control problems in ADHD may arise from neural deficits in the frontoparietal and occipitoparietal systems and other brain structures such as cerebellum.
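Steady-state amplitude and phase are usually read out at the stimulation frequency from the Fourier spectrum of the averaged epochs. The following is a generic sketch of that readout, not the authors' analysis; the sampling rate, the 40 Hz rate and the toy data are assumptions.

```python
import numpy as np

def steady_state_amp_phase(epochs, sfreq, stim_freq):
    """Amplitude and phase at the stimulation frequency, taken from the FFT of the
    trial-averaged signal so that only phase-locked activity survives.
    epochs: array of shape (n_trials, n_samples)."""
    evoked = epochs.mean(axis=0)
    spectrum = np.fft.rfft(evoked)
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - stim_freq))         # bin closest to the stimulus rate
    amplitude = 2.0 * np.abs(spectrum[k]) / evoked.size
    phase = np.angle(spectrum[k])
    return amplitude, phase

# Toy 40 Hz response: 60 trials of 2 s sampled at 500 Hz, signal plus noise.
rng = np.random.default_rng(1)
sfreq = 500.0
t = np.arange(0.0, 2.0, 1.0 / sfreq)
epochs = 0.5 * np.sin(2 * np.pi * 40.0 * t) + rng.normal(0.0, 1.0, (60, t.size))
print(steady_state_amp_phase(epochs, sfreq, stim_freq=40.0))
```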
Emberson, Lauren L.; Cannon, Grace; Palmeri, Holly; Richards, John E.; Aslin, Richard N.
2016-01-01
How does the developing brain respond to recent experience? Repetition suppression (RS) is a robust and well-characterized response to recent experience found predominantly in the perceptual cortices of the adult brain. We use functional near-infrared spectroscopy (fNIRS) to investigate how perceptual (temporal and occipital) and frontal cortices in the infant brain respond to auditory and visual stimulus repetitions (spoken words and faces). In Experiment 1, we find strong evidence of repetition suppression in the frontal cortex but only for auditory stimuli. In perceptual cortices, we find only suggestive evidence of auditory RS in the temporal cortex and no evidence of visual RS in any ROI. In Experiments 2 and 3, we replicate and extend these findings. Overall, we provide the first evidence that infant and adult brains respond differently to stimulus repetition. We suggest that the frontal lobe may support the development of RS in perceptual cortices. PMID:28012401
Plastic brain mechanisms for attaining auditory temporal order judgment proficiency.
Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas
2010-04-15
Accurate perception of the order of occurrence of sensory information is critical for building up coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the course of the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed a shift from initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, as the experiment progressed, responses in the left and right PSR went from being correlated to uncorrelated. These collective findings provide insights into the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike-timing-dependent plasticity. Copyright 2010 Elsevier Inc. All rights reserved.
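In the electrical neuroimaging framework referred to above, "pure strength" modulations are typically indexed by global field power (GFP) and topographic modulations by the global dissimilarity (DISS) between strength-normalized maps. The sketch below gives the standard formulas on toy data; it is not the authors' statistical procedure, and the electrode count is an assumption.

```python
import numpy as np

def global_field_power(v):
    """GFP: spatial standard deviation across electrodes at one time point."""
    return np.std(v - v.mean())

def global_dissimilarity(v1, v2):
    """DISS: root-mean-square difference between average-referenced maps scaled to
    unit GFP. 0 means identical topographies, 2 means inverted ones; it is blind
    to pure strength differences."""
    n1 = (v1 - v1.mean()) / global_field_power(v1)
    n2 = (v2 - v2.mean()) / global_field_power(v2)
    return np.sqrt(np.mean((n1 - n2) ** 2))

# Toy maps over 64 electrodes: same topography at twice the strength vs a shuffled map.
rng = np.random.default_rng(2)
map_a = rng.normal(size=64)
print(global_field_power(map_a), global_field_power(2 * map_a))  # strength differs
print(global_dissimilarity(map_a, 2 * map_a))                    # ~0: same topography
print(global_dissimilarity(map_a, rng.permutation(map_a)))       # >0: different topography
```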
Auditory neuroimaging with fMRI and PET.
Talavage, Thomas M; Gonzalez-Castillo, Javier; Scott, Sophie K
2014-01-01
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio
2018-04-01
Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, in order to use the ASSR as a biological marker for disease states, it needs to be understood how different vigilance states and underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wake and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed on the same day a 60 min nap session and two 30 min wakefulness sessions (before and after the nap). During these sessions, amplitude modulated (AM) white noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded and time-frequency analyses were performed to assess the ASSR during wakefulness and NREM periods. Our analysis revealed that, depending on the electrode location, the stimulation frequency applied and the window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before and after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating the ASSR, and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles. Copyright © 2017 Elsevier Inc. All rights reserved.
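A brief sketch of how amplitude-modulated white-noise stimuli of the kind described above can be generated; the modulation depth, sampling rate and duration are illustrative assumptions rather than the study's stimulus parameters.

```python
import numpy as np

def am_white_noise(duration_s, mod_freq_hz, fs=44100, mod_depth=1.0, seed=0):
    """White-noise carrier multiplied by a sinusoidal envelope at mod_freq_hz,
    normalized to unit peak to avoid clipping on playback."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.normal(0.0, 1.0, t.size)
    envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_freq_hz * t)
    stimulus = carrier * envelope
    return stimulus / np.max(np.abs(stimulus))

# For example, a 3 s burst modulated at 12 Hz (one of many possible rates).
stim = am_white_noise(3.0, 12.0)
print(stim.shape, float(stim.min()), float(stim.max()))
```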
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of the Zenk response that was independent of sex, brain region, or treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right and the majority of individual birds were left-hemisphere dominant.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Abstract Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
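The stimuli described above are produced by convolving a dry source sound with a room impulse response. The sketch below uses a synthetic exponentially decaying noise RIR as a stand-in for a measured one; the sampling rate, RT60 and source signal are assumptions.

```python
import numpy as np

def synth_rir(fs=16000, rt60_s=0.6, length_s=1.0, seed=0):
    """Toy monaural room impulse response: white noise with an exponential decay
    whose rate is set by the reverberation time (RT60, the -60 dB point)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(length_s * fs)) / fs
    decay = np.exp(-6.91 * t / rt60_s)
    return rng.normal(0.0, 1.0, t.size) * decay

def reverberate(dry, rir):
    """Convolve a dry source with the impulse response and normalize the peak."""
    wet = np.convolve(dry, rir)
    return wet / np.max(np.abs(wet))

fs = 16000
t = np.arange(int(0.3 * fs)) / fs
dry = np.sin(2.0 * np.pi * 440.0 * t) * np.hanning(t.size)  # a brief tonal source
print(reverberate(dry, synth_rir(fs)).shape)
```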
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
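The race model inequality analysis mentioned above tests whether the cumulative distribution of multisensory reaction times exceeds the probability-summation bound F_AV(t) <= F_A(t) + F_V(t). Below is a generic sketch of that comparison on simulated reaction times; it is not the authors' exact procedure, and the distributions are assumptions.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times evaluated on t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_audio, rt_visual, rt_av, t_grid):
    """Positive values mean the bimodal CDF exceeds Miller's bound,
    i.e. responses are faster than any probability-summation (race) model allows."""
    bound = np.minimum(ecdf(rt_audio, t_grid) + ecdf(rt_visual, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Toy reaction times in ms (illustrative distributions, not the ferret data).
rng = np.random.default_rng(3)
rt_a = rng.normal(260.0, 40.0, 200)
rt_v = rng.normal(280.0, 40.0, 200)
rt_av = rng.normal(230.0, 35.0, 200)
grid = np.linspace(150.0, 400.0, 26)
print("maximum violation:", round(float(np.max(race_model_violation(rt_a, rt_v, rt_av, grid))), 3))
```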
ERIC Educational Resources Information Center
Sheridan, Carolin J.; Matuz, Tamara; Draganova, Rossitza; Eswaran, Hari; Preissl, Hubert
2010-01-01
Fetal magnetoencephalography (fMEG) is the only non-invasive method for investigating evoked brain responses and spontaneous brain activity generated by the fetus "in utero". Fetal auditory as well as visual-evoked fields have been successfully recorded in basic stimulus-response studies. Moreover, paradigms investigating precursors for cognitive…
Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne
2013-01-01
Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters into biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
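As a flavor of the "functional description rendered more biophysical" approach discussed above, inner-hair-cell stages in such frameworks are often written as a saturating (Boltzmann-type) transduction nonlinearity followed by low-pass filtering of the receptor potential. The sketch below is a generic illustration with made-up constants, not the authors' 12-parameter model.

```python
import numpy as np

def ihc_receptor_potential(bm_motion, fs=100000.0, cutoff_hz=1000.0,
                           x0=1e-3, slope=1e-3, v_max=0.03):
    """Toy inner-hair-cell stage: an asymmetric Boltzmann transduction of
    basilar-membrane motion followed by a first-order low-pass filter standing in
    for the membrane capacitance. The asymmetric operating point produces the
    rectified (DC) component that survives low-pass filtering at high frequencies."""
    p_open = 1.0 / (1.0 + np.exp(-(bm_motion - x0) / slope))
    transduced = v_max * p_open
    alpha = 1.0 / (1.0 + fs / (2.0 * np.pi * cutoff_hz))  # first-order IIR coefficient
    out = np.empty_like(transduced)
    acc = transduced[0]
    for i, x in enumerate(transduced):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

# A 4 kHz tone burst standing in for basilar-membrane motion at one cochlear place.
fs = 100000.0
t = np.arange(0.0, 0.02, 1.0 / fs)
bm = 5e-3 * np.sin(2.0 * np.pi * 4000.0 * t)
print(ihc_receptor_potential(bm, fs)[:5])
```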
ERIC Educational Resources Information Center
Student, M.; Sohmer, H.
1978-01-01
In an attempt to resolve the question as to whether children with autistic traits have an organic nervous system lesion, auditory nerve and brainstem evoked responses were recorded in a group of 15 children (4 to 12 years old) with autistic traits. (Author)
Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.
Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
2016-01-01
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.
Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation
2013-01-01
Background: Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training and one post-training sessions. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. Results: After both passive listening and active training, the amplitude of the P2m wave with latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. Conclusion: The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability for scrutinizing fine differences in pre-voicing time. Different trajectories of brain and behaviour changes suggest that the preceding effect of a P2m increase relates to brain processes, which are necessary precursors of perceptual learning. Cautious discussion is required when interpreting the finding of a P2 amplitude increase between recordings before and after training and learning. PMID:24314010
Valéry, Benoît; Scannella, Sébastien; Peysakhovich, Vsevolod; Barone, Pascal; Causse, Mickaël
2017-07-01
In the aeronautics field, some authors have suggested that an aircraft's attitude sonification could be used by pilots to cope with spatial disorientation situations. Such a system is currently used by blind pilots to control the attitude of their aircraft. However, given the suspected higher auditory attentional capacities of blind people, the possibility for sighted individuals to use this system remains an open question. For example, its introduction may overload the auditory channel, which may in turn alter the responsiveness of pilots to infrequent but critical auditory warnings. In this study, two groups of pilots (blind versus sighted) performed a simulated flight experiment consisting of successive aircraft maneuvers, on the sole basis of an aircraft sonification. Maneuver difficulty was varied while we assessed flight performance along with subjective and electroencephalographic (EEG) measures of workload. The results showed that both groups of participants reached target-attitudes with a good accuracy. However, more complex maneuvers increased subjective workload and impaired brain responsiveness toward unexpected auditory stimuli as demonstrated by lower N1 and P3 amplitudes. Despite that the EEG signal showed a clear reorganization of the brain in the blind participants (higher alpha power), the brain responsiveness to unexpected auditory stimuli was not significantly different between the two groups. The results suggest that an auditory display might provide useful additional information to spatially disoriented pilots with normal vision. However, its use should be restricted to critical situations and simple recovery or guidance maneuvers. Copyright © 2017 Elsevier Ltd. All rights reserved.
A longitudinal study of auditory evoked field and language development in young children.
Yoshimura, Yuko; Kikuchi, Mitsuru; Ueno, Sanae; Shitamichi, Kiyomi; Remijn, Gerard B; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Furutani, Naoki; Oi, Manabu; Munesue, Toshio; Tsubokawa, Tsunehisa; Higashida, Haruhiro; Minabe, Yoshio
2014-11-01
The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months old at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural brain response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children. Copyright © 2014 Elsevier Inc. All rights reserved.
fMRI during natural sleep as a method to study brain function during early childhood.
Redcay, Elizabeth; Kennedy, Daniel P; Courchesne, Eric
2007-12-01
Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions including superior temporal gyrus/sulcus, medial frontal cortex and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4 year olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.
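A minimal sketch of the seed-based functional connectivity (fcMRI) analysis described above: correlate the mean time course of a seed region with every voxel. Preprocessing, filtering and statistics are omitted, and the data shapes are assumptions rather than the study's acquisition parameters.

```python
import numpy as np

def seed_connectivity(bold, seed_mask):
    """bold: array (n_voxels, n_timepoints); seed_mask: boolean (n_voxels,).
    Returns each voxel's Pearson correlation with the seed's mean time course."""
    seed_ts = bold[seed_mask].mean(axis=0)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    bold_z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    return bold_z @ seed_z / bold.shape[1]

# Toy data: 1000 voxels x 150 volumes, seed defined as the first 20 voxels.
rng = np.random.default_rng(5)
bold = rng.normal(size=(1000, 150))
seed_mask = np.zeros(1000, dtype=bool)
seed_mask[:20] = True
print(seed_connectivity(bold, seed_mask).shape)
```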
Estradiol-dependent modulation of auditory processing and selectivity in songbirds
Maney, Donna; Pinaud, Raphael
2011-01-01
The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556
Sound envelope processing in the developing human brain: A MEG study.
Tang, Huizhen; Brock, Jon; Johnson, Blake W
2016-02-01
This study investigated auditory cortical processing of linguistically-relevant temporal modulations in the developing brains of young children. Auditory envelope following responses to white noise amplitude modulated at rates of 1-80 Hz in healthy children (aged 3-5 years) and adults were recorded using a paediatric magnetoencephalography (MEG) system and a conventional MEG system, respectively. For children, there were envelope following responses to slow modulations but no significant responses to rates higher than about 25 Hz, whereas adults showed significant envelope following responses to almost the entire range of stimulus rates. Our results show that the auditory cortex of preschool-aged children has a sharply limited capacity to process rapid amplitude modulations in sounds, as compared to the auditory cortex of adults. These neurophysiological results are consistent with previous psychophysical evidence for a protracted maturational time course for auditory temporal processing. The findings are also in good agreement with current linguistic theories that posit a perceptual bias for low frequency temporal information in speech during language acquisition. These insights also have clinical relevance for our understanding of language disorders that are associated with difficulties in processing temporal information in speech. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xiao, Jun; Xie, Qiuyou; He, Yanbin; Yu, Tianyou; Lu, Shenglin; Huang, Ningmeng; Yu, Ronghao; Li, Yuanqing
2016-09-01
The Coma Recovery Scale-Revised (CRS-R) is a consistent and sensitive behavioral assessment standard for disorders of consciousness (DOC) patients. However, the CRS-R has limitations due to its dependence on behavioral markers, which has led to a high rate of misdiagnosis. Brain-computer interfaces (BCIs), which directly detect brain activities without any behavioral expression, can be used to evaluate a patient’s state. In this study, we explored the application of BCIs in assisting CRS-R assessments of DOC patients. Specifically, an auditory passive EEG-based BCI system with an oddball paradigm was proposed to facilitate the evaluation of one item of the auditory function scale in the CRS-R - the auditory startle. The results obtained from five healthy subjects validated the efficacy of the BCI system. Nineteen DOC patients participated in the CRS-R and BCI assessments, of which three patients exhibited no responses in the CRS-R assessment but were responsive to auditory startle in the BCI assessment. These results revealed that a proportion of DOC patients who have no behavioral responses in the CRS-R assessment can generate neural responses, which can be detected by our BCI system. Therefore, the proposed BCI may provide more sensitive results than the CRS-R and thus assist CRS-R behavioral assessments.
A corollary discharge maintains auditory sensitivity during sound production
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2002-08-01
Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.
Martin, R; Simon, E; Simon-Oppermann, C
1981-01-01
1. Thermodes were chronically implanted into various levels of the brain stem of sixteen Pekin ducks. The effects of local thermal stimulation on metabolic heat production, core temperature, peripheral skin temperature and respiratory frequency were investigated. 2. Four areas of thermode positions were determined according to the responses observed and were histologically identified at the end of the investigation. 3. Thermal stimulation of the lower mid-brain/upper pontine brain stem (Pos. III) elicited an increase in metabolic heat production, cutaneous vasoconstriction and rises in core temperature in response to cooling at thermoneutral and cold ambient conditions and, further, inhibition of panting by cooling and activation of panting by heating at warm ambient conditions. The metabolic response to cooling this brain stem section amounted to -0.1 W/(kg·°C), as compared with -7 W/(kg·°C) in response to total body cooling. 4. Cooling of the anterior and middle hypothalamus (Pos. II) caused vasodilatation in the skin and did not elicit shivering. The resulting drop in core temperature at a given degree of cooling was greater than the rise in core temperature in response to equivalent cooling of the lower mid-brain/upper pontine brain stem. 5. Cooling of the preoptic forebrain (Pos. I) and of the myelencephalon (Pos. IV) did not elicit thermoregulatory reactions. 6. It is concluded that the duck's brain stem contains thermoreceptive structures in the lower mid-brain/upper pontine section. However, the brain stem as a whole appears to contribute little to cold defence during general hypothermia because of the inhibitory effects originating in the anterior and middle hypothalamus. Cold defence in the duck, which is comparable in strength to that in mammals, has to rely on extracerebral thermosensory structures. PMID:7310688
ERIC Educational Resources Information Center
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W.
2013-01-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in…
BALDEY: A database of auditory lexical decisions.
Ernestus, Mirjam; Cutler, Anne
2015-01-01
In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
[Characterization of stem cells derived from the neonatal auditory sensory epithelium].
Diensthuber, M; Heller, S
2010-11-01
In contrast to regenerating hair cell-bearing organs of nonmammalian vertebrates the adult mammalian organ of Corti appears to have lost its ability to maintain stem cells. The result is a lack of regenerative ability and irreversible hearing loss following auditory hair cell death. Unexpectedly, the neonatal auditory sensory epithelium has recently been shown to harbor cells with stem cell features. The origin of these cells within the cochlea's sensory epithelium is unknown. We applied a modified neurosphere assay to identify stem cells within distinct subregions of the neonatal mouse auditory sensory epithelium. Sphere cells were characterized by multiple markers and morphologic techniques. Our data reveal that both the greater and the lesser epithelial ridge contribute to the sphere-forming stem cell population derived from the auditory sensory epithelium. These self-renewing sphere cells express a variety of markers for neural and otic progenitor cells and mature inner ear cell types. Stem cells can be isolated from specific regions of the auditory sensory epithelium. The distinct features of these cells imply a potential application in the development of a cell replacement therapy to regenerate the damaged sensory epithelium.
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients who have visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines the auditory steady state response (ASSR) and a spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCIs into a hybrid system can yield better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
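To make the feature combination concrete, here is a minimal sketch of how an ASSR feature (spectral power at the attended stream's AM frequency) and a P300 feature (mean amplitude in a late window after a target beep) could be concatenated and fed to an LDA classifier. The 40 Hz AM frequency, window limits, and synthetic epochs are assumptions for illustration, not the parameters of the published system.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def hybrid_features(epoch, fs=256, am_freq=40.0, p300_win=(0.25, 0.5)):
    """epoch: 1-D EEG segment time-locked to a target beep."""
    # ASSR feature: power at the amplitude-modulation frequency of the attended stream
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, 1.0 / fs)
    assr_power = spec[np.argmin(np.abs(freqs - am_freq))]
    # P300 feature: mean amplitude in a late time window after the beep
    i0, i1 = int(p300_win[0] * fs), int(p300_win[1] * fs)
    p300_amp = epoch[i0:i1].mean()
    return np.array([assr_power, p300_amp])

# toy example: 100 attended vs. 100 unattended epochs (binary problem)
rng = np.random.default_rng(1)
fs, dur = 256, 0.8
t = np.arange(int(fs * dur)) / fs
def make_epoch(attended):
    x = rng.normal(0, 1, t.size)
    if attended:                       # stronger 40 Hz ASSR plus a P300-like bump
        x += 0.5 * np.sin(2 * np.pi * 40 * t)
        x += 0.8 * np.exp(-((t - 0.35) ** 2) / 0.005)
    return x

X = np.array([hybrid_features(make_epoch(a), fs) for a in [1] * 100 + [0] * 100])
y = np.array([1] * 100 + [0] * 100)
print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```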
The effects of divided attention on auditory priming.
Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W
2007-09-01
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.
Bidelman, Gavin M; Dexter, Lauren
2015-04-01
We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. Copyright © 2015 Elsevier Inc. All rights reserved.
Kaiser, Andreas; Kale, Ajay; Novozhilova, Ekaterina; Siratirakun, Piyaporn; Aquino, Jorge B; Thonabulsombat, Charoensri; Ernfors, Patrik; Olivius, Petri
2014-05-30
Conditioned medium (CM), made by collecting medium after a few days in cell culture and then re-using it to further stimulate other cells, has been a known experimental concept since the 1950s. Our group has explored this technique to stimulate the performance of cells in culture in general, and to evaluate stem- and progenitor cell aptitude for auditory nerve repair enhancement in particular. As compared with other media, all primary endpoints in our published experimental settings have weighed in favor of conditioned culture medium, where we have shown that conditioned culture medium has a stimulatory effect on cell survival. In order to explore the reasons for this improved survival we set out to analyze the conditioned culture medium. We utilized ELISA kits to investigate whether brain stem (BS) slice CM contains any significant amounts of brain-derived neurotrophic factor (BDNF) and glial cell line-derived neurotrophic factor (GDNF). We further looked for a donor cell with progenitor characteristics that would be receptive to BDNF and GDNF. We chose the well-documented boundary cap (BC) progenitor cells to be tested in our in vitro co-culture setting together with the cochlear nucleus (CN) of the BS. The results show that BS CM contains BDNF and GDNF, and that survival of BC cells, as well as BC cell differentiation into neurons, was enhanced when BS CM was used. Altogether, we conclude that BC cells transplanted into a BDNF- and GDNF-rich environment could be suitable for treatment of a traumatized or degenerated auditory nerve. Copyright © 2014 Elsevier B.V. All rights reserved.
Rowe, James B.; Ghosh, Boyd C. P.; Carlyon, Robert P.; Plack, Christopher J.; Gockel, Hedwig E.
2014-01-01
Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem. PMID:25231610
Hughes, Laura E; Rowe, James B; Ghosh, Boyd C P; Carlyon, Robert P; Plack, Christopher J; Gockel, Hedwig E
2014-12-15
Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem. Copyright © 2014 the American Physiological Society.
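For readers unfamiliar with the stimulus construction behind the BMLD, the following sketch generates the two critical conditions: a tone in phase at both ears (S0N0) versus a tone phase-inverted at one ear (SπN0), both embedded in identical (diotic) noise. Tone frequency, levels, and duration are illustrative assumptions, not the study's exact stimulus parameters.

```python
import numpy as np

def bmld_stimulus(antiphase_tone=False, fs=44100, dur=0.5, f_tone=500.0,
                  tone_level=0.05, noise_level=0.2, seed=0):
    """Return a stereo (n_samples, 2) array: diotic noise plus a tone that is
    either in phase at both ears (S0N0) or phase-inverted at one ear (SpiN0)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    noise = noise_level * rng.normal(0, 1, t.size)       # identical noise at both ears
    tone = tone_level * np.sin(2 * np.pi * f_tone * t)
    left = noise + tone
    right = noise + (-tone if antiphase_tone else tone)  # Spi: invert tone at one ear
    return np.column_stack([left, right])

s0n0 = bmld_stimulus(antiphase_tone=False)   # tone harder to detect
spin0 = bmld_stimulus(antiphase_tone=True)   # tone easier to detect (BMLD)
print(s0n0.shape, spin0.shape)
```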
Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2013-01-01
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
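The gain from trial averaging reported above can be illustrated with a toy example: averaging several same-class trials before training a linear SVM raises the signal-to-noise ratio of the features and, typically, classification accuracy. The synthetic features and averaging factors below are assumptions for illustration, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_feat = 600, 40
# toy single-trial "P300" features: weak class separation buried in noise
X = rng.normal(0, 1, (n_trials, n_feat))
y = rng.integers(0, 2, n_trials)
X[y == 1, :5] += 0.4   # small target-related shift in a few features

def average_trials(X, y, k):
    """Average k same-class trials to boost SNR before classification."""
    Xa, ya = [], []
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        for i in range(0, len(idx) - k + 1, k):
            Xa.append(X[idx[i:i + k]].mean(axis=0))
            ya.append(cls)
    return np.array(Xa), np.array(ya)

for k in (1, 5, 10):
    Xa, ya = average_trials(X, y, k)
    acc = cross_val_score(SVC(kernel="linear"), Xa, ya, cv=5).mean()
    print(f"{k:2d}-trial averaging: accuracy {acc:.2f}")
```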
Using Auditory Steady State Responses to Outline the Functional Connectivity in the Tinnitus Brain
Schlee, Winfried; Weisz, Nathan; Bertrand, Olivier; Hartmann, Thomas; Elbert, Thomas
2008-01-01
Background Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. Methods and Findings Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. Conclusions To the best of our knowledge, this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus. PMID:19005566
Nonverbal auditory agnosia with lesion to Wernicke's area.
Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic
2010-01-01
We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.
Spéder, Pauline; Brand, Andrea H.
2014-01-01
Summary Neural stem cells in the adult brain exist primarily in a quiescent state but are reactivated in response to changing physiological conditions. How do stem cells sense and respond to metabolic changes? In the Drosophila CNS, quiescent neural stem cells are reactivated synchronously in response to a nutritional stimulus. Feeding triggers insulin production by blood-brain barrier glial cells, activating the insulin/insulin-like growth factor pathway in underlying neural stem cells and stimulating their growth and proliferation. Here we show that gap junctions in the blood-brain barrier glia mediate the influence of metabolic changes on stem cell behavior, enabling glia to respond to nutritional signals and reactivate quiescent stem cells. We propose that gap junctions in the blood-brain barrier are required to translate metabolic signals into synchronized calcium pulses and insulin secretion. PMID:25065772
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore provide, for the first time, differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore provide, for the first time, differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442
Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro
2015-01-01
Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal injury and had undergone an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latencies of all waves and interpeak intervals, bilaterally, when comparing both evaluations. Moreover, the amplitude of all waves increased; the increase was statistically significant for wave V in the right ear and for wave III in the left ear. As for the P3, decreased latency and increased amplitude were found for both ears at reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of the behavioral measures after one year of training. PMID:26676270
Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I
2001-03-01
Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.
Oscillatory frontal theta responses are increased upon bisensory stimulation.
Sakowitz, O W; Schürmann, M; Başar, E
2000-05-01
To investigate the functional correlation of oscillatory EEG components with the interaction of sensory modalities following simultaneous audio-visual stimulation. In an experimental study (15 subjects) we compared auditory evoked potentials (AEPs) and visual evoked potentials (VEPs) to bimodal evoked potentials (BEPs; simultaneous auditory and visual stimulation). BEPs were assumed to be brain responses to complex stimuli as a marker for intermodal associative functioning. Frequency domain analysis of these EPs showed marked theta-range components in response to bimodal stimulation. These theta components could not be explained by linear addition of the unimodal responses in the time domain. Considering topography the increased theta-response showed a remarkable frontality in proximity to multimodal association cortices. Referring to methodology we try to demonstrate that, even if various behavioral correlates of brain oscillations exist, common patterns can be extracted by means of a systems-theoretical approach. Serving as an example of functionally relevant brain oscillations, theta responses could be interpreted as an indicator of associative information processing.
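The key analytic step, testing whether the bimodal theta response exceeds the linear sum of the unimodal responses, can be sketched as follows: sum the auditory and visual evoked potentials in the time domain, band-pass both the sum and the bimodal response in the theta range, and compare their power. The toy waveforms and filter settings below are assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def theta_power(x, fs=500, band=(4.0, 7.0)):
    """RMS power of x in the theta band (zero-phase Butterworth band-pass)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    return np.mean(xf ** 2)

# toy averaged evoked potentials (1 s at 500 Hz)
fs = 500
t = np.arange(fs) / fs
aep = 2.0 * np.exp(-((t - 0.10) ** 2) / 0.001)                            # auditory EP
vep = 1.5 * np.exp(-((t - 0.15) ** 2) / 0.002)                            # visual EP
bep = aep + vep + 1.0 * np.sin(2 * np.pi * 5 * t) * np.exp(-t / 0.4)      # bimodal EP with extra theta

linear_sum = aep + vep
print("theta power, bimodal EP :", theta_power(bep, fs))
print("theta power, AEP + VEP  :", theta_power(linear_sum, fs))
# a theta excess in the bimodal EP beyond the linear sum would point to genuine audio-visual interaction
```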
Recovery function of the human brain stem auditory-evoked potential.
Kevanishvili, Z; Lagidze, Z
1979-01-01
Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.
Physiological modulators of Kv3.1 channels adjust firing patterns of auditory brain stem neurons.
Brown, Maile R; El-Hassar, Lynda; Zhang, Yalan; Alvaro, Giuseppe; Large, Charles H; Kaczmarek, Leonard K
2016-07-01
Many rapidly firing neurons, including those in the medial nucleus of the trapezoid body (MNTB) in the auditory brain stem, express "high threshold" voltage-gated Kv3.1 potassium channels that activate only at positive potentials and are required for stimuli to generate rapid trains of action potentials. We now describe the actions of two imidazolidinedione derivatives, AUT1 and AUT2, which modulate Kv3.1 channels. Using Chinese hamster ovary cells stably expressing rat Kv3.1 channels, we found that lower concentrations of these compounds shift the voltage of activation of Kv3.1 currents toward negative potentials, increasing currents evoked by depolarization from typical neuronal resting potentials. Single-channel recordings also showed that AUT1 shifted the open probability of Kv3.1 to more negative potentials. Higher concentrations of AUT2 also shifted inactivation to negative potentials. The effects of lower and higher concentrations could be mimicked in numerical simulations by increasing rates of activation and inactivation, respectively, with no change in intrinsic voltage dependence. In brain slice recordings of mouse MNTB neurons, both AUT1 and AUT2 modulated firing rate at high rates of stimulation, a result predicted by numerical simulations. Our results suggest that pharmaceutical modulation of Kv3.1 currents represents a novel avenue for manipulation of neuronal excitability and has the potential for therapeutic benefit in the treatment of hearing disorders. Copyright © 2016 the American Physiological Society.
Physiological modulators of Kv3.1 channels adjust firing patterns of auditory brain stem neurons
Brown, Maile R.; El-Hassar, Lynda; Zhang, Yalan; Alvaro, Giuseppe; Large, Charles H.
2016-01-01
Many rapidly firing neurons, including those in the medial nucleus of the trapezoid body (MNTB) in the auditory brain stem, express “high threshold” voltage-gated Kv3.1 potassium channels that activate only at positive potentials and are required for stimuli to generate rapid trains of action potentials. We now describe the actions of two imidazolidinedione derivatives, AUT1 and AUT2, which modulate Kv3.1 channels. Using Chinese hamster ovary cells stably expressing rat Kv3.1 channels, we found that lower concentrations of these compounds shift the voltage of activation of Kv3.1 currents toward negative potentials, increasing currents evoked by depolarization from typical neuronal resting potentials. Single-channel recordings also showed that AUT1 shifted the open probability of Kv3.1 to more negative potentials. Higher concentrations of AUT2 also shifted inactivation to negative potentials. The effects of lower and higher concentrations could be mimicked in numerical simulations by increasing rates of activation and inactivation, respectively, with no change in intrinsic voltage dependence. In brain slice recordings of mouse MNTB neurons, both AUT1 and AUT2 modulated firing rate at high rates of stimulation, a result predicted by numerical simulations. Our results suggest that pharmaceutical modulation of Kv3.1 currents represents a novel avenue for manipulation of neuronal excitability and has the potential for therapeutic benefit in the treatment of hearing disorders. PMID:27052580
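A minimal numerical illustration of the reported mechanism: if Kv3.1 steady-state activation is approximated by a Boltzmann function, shifting the half-activation voltage toward negative potentials increases the current available at voltages reached during firing. The Boltzmann parameters and conductance below are rough assumptions, not the authors' fitted model.

```python
import numpy as np

def kv31_open_prob(v_mV, v_half=18.0, k=8.0):
    """Steady-state activation approximated by a Boltzmann function."""
    return 1.0 / (1.0 + np.exp(-(v_mV - v_half) / k))

def kv31_current(v_mV, v_half, g_max=10.0, e_k=-90.0):
    """Instantaneous K+ current (nS * mV -> pA) at voltage v_mV."""
    return g_max * kv31_open_prob(v_mV, v_half) * (v_mV - e_k)

v = -10.0  # a voltage reached early during an action potential
for shift in (0.0, -5.0, -10.0):   # AUT-like negative shifts of activation
    print(f"V1/2 shift {shift:+5.1f} mV -> current {kv31_current(v, 18.0 + shift):7.1f} pA")
```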
Decoding Articulatory Features from fMRI Responses in Dorsal Speech Regions.
Correia, Joao M; Jansma, Bernadette M B; Bonte, Milene
2015-11-11
The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception remains highly controversial, however. Methodological constraints related to experimental designs and analysis methods have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus associated with the perception of human voices as well as somatosensory regions bilaterally. Our findings highlight the close connection between brain systems for speech perception and production, and in particular, indicate the availability of articulatory codes during passive speech perception. Sensorimotor integration is central to verbal communication and provides a link between auditory signals of speech perception and motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor), in addition to acoustic/phonetic, representations. In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific brain responses of speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception. Copyright © 2015 the authors 0270-6474/15/3515015-11$15.00/0.
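The cross-classification logic, training a classifier on one syllable class and testing whether it generalizes to another, can be sketched with synthetic voxel patterns: a shared "place of articulation" code plus a manner-specific offset. The dimensions and effect sizes are assumptions for illustration, not the study's fMRI data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_per_class, n_vox = 60, 200
place_axis = rng.normal(0, 1, n_vox)        # shared "place of articulation" pattern

def trials(place, manner_offset):
    """Synthetic single-trial voxel patterns: place code + manner-specific offset + noise."""
    base = place * place_axis + manner_offset
    return base + rng.normal(0, 2.0, (n_per_class, n_vox))

# stops (/pa/ vs /ta/) for training, fricatives (/fa/ vs /sa/) for testing
X_train = np.vstack([trials(-1, 0.5), trials(+1, 0.5)])
X_test = np.vstack([trials(-1, -0.5), trials(+1, -0.5)])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = SVC(kernel="linear").fit(X_train, y)
print("cross-syllable generalization accuracy:", clf.score(X_test, y))
```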
Altered auditory function in rats exposed to hypergravic fields
NASA Technical Reports Server (NTRS)
Jones, T. A.; Hoffman, L.; Horowitz, J. M.
1982-01-01
The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.
Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
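As an illustration of the third pillar, integration across imaging methods via representational similarity analysis, the sketch below builds representational dissimilarity matrices from two synthetic measurement modalities and correlates them with Spearman's rho. Stimulus counts, dimensionalities, and the correlation-distance metric are assumptions, not a prescription from the article.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_stimuli = 20
latent = rng.normal(0, 1, (n_stimuli, 5))            # shared stimulus structure

meg_patterns = latent @ rng.normal(0, 1, (5, 60)) + rng.normal(0, 1, (n_stimuli, 60))
fmri_patterns = latent @ rng.normal(0, 1, (5, 300)) + rng.normal(0, 1, (n_stimuli, 300))

# representational dissimilarity matrices as condensed distance vectors
rdm_meg = pdist(meg_patterns, metric="correlation")
rdm_fmri = pdist(fmri_patterns, metric="correlation")

rho, p = spearmanr(rdm_meg, rdm_fmri)
print(f"MEG-fMRI representational similarity: rho={rho:.2f}, p={p:.3g}")
```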
A Stem Cell-Seeded Nanofibrous Scaffold for Auditory Nerve Replacement
2015-10-01
guinea pigs. Initial results show improved electrically-evoked auditory brainstem responses in cell-seeded implants compared to control, cell-free... scaffold's conduit, but the IAM of the guinea pig and limits imposed by the surgical approach make this difficult. Alternatives are being pursued... transplantation of the seeded nanofibrous scaffold. Task 13. Group 1: Pilot deafening. Confirm efficacy of β-bungarotoxin in guinea pig and time point of
Baseline vestibular and auditory findings in a trial of post-concussive syndrome
Meehan, Anna; Searing, Elizabeth; Weaver, Lindell; Lewandowski, Andrew
2016-01-01
Previous studies have reported high rates of auditory and vestibular-balance deficits immediately following head injury. This study uses a comprehensive battery of assessments to characterize auditory and vestibular function in 71 U.S. military service members with chronic symptoms following mild traumatic brain injury that did not resolve with traditional interventions. The majority of the study population reported hearing loss (70%) and recent vestibular symptoms (83%). Central auditory deficits were most prevalent, with 58% of participants failing the SCAN3:A screening test and 45% showing abnormal responses on auditory steady-state response testing presented at a suprathreshold intensity. Only 17% of the participants had abnormal hearing (>25 dB hearing loss) based on the pure-tone average. Objective vestibular testing supported significant deficits in this population, regardless of whether the participant self-reported active symptoms. The composite score on the Sensory Organization Test was lower than expected from normative data (mean 69.6 ± 15.6). High abnormality rates were found in funduscopy torsion (58%), oculomotor assessments (49%), ocular and cervical vestibular evoked myogenic potentials (46% and 33%, respectively), and monothermal calorics (40%). It is recommended that a full peripheral and central auditory, oculomotor, and vestibular-balance evaluation be completed on military service members who have sustained head trauma.
Visual influences on auditory spatial learning
King, Andrew J.
2008-01-01
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967
Neurological Diagnostic Tests and Procedures
... stem auditory evoked response ) are used to assess high-frequency hearing loss, diagnose any damage to the acoustic ... imaging , also called ultrasound scanning or sonography, uses high-frequency sound waves to obtain images inside the body. ...
Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans
Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro
2015-01-01
Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703
Brainstem auditory evoked responses in an equine patient population: part I--adult horses.
Aleman, M; Holliday, T A; Nieto, J E; Williams, D C
2014-01-01
Brainstem auditory evoked response has been an underused diagnostic modality in horses as evidenced by few reports on the subject. To describe BAER findings, common clinical signs, and causes of hearing loss in adult horses. Study group, 76 horses; control group, 8 horses. Retrospective. BAER records from the Clinical Neurophysiology Laboratory were reviewed from the years of 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a posthoc Bonferroni test were performed. Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral and unilateral in 74% (n = 42/57) and 26% (n = 15/57) of the horses, respectively. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease whereas THO and otitis were the most common causes of unilateral deficits. Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and in horses with certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.
Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude
Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea
2013-01-01
Objective Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict aptitude in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444
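The predictor analysis described above reduces, in its simplest form, to correlating an oddball ERP amplitude measure with later spelling accuracy across participants. The sketch below does this with synthetic values (a more negative N2 paired with higher accuracy); it is illustrative only and not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subjects = 40
n2_amplitude = rng.normal(-4.0, 1.5, n_subjects)   # oddball N2 amplitude in microvolts (negative)
accuracy = np.clip(0.6 - 0.05 * n2_amplitude + rng.normal(0, 0.08, n_subjects), 0, 1)

r, p = pearsonr(n2_amplitude, accuracy)
print(f"N2 amplitude vs. P300-BCI accuracy: r={r:.2f}, p={p:.3g}")
# here a larger (more negative) N2 goes with higher accuracy, mirroring a predictive relationship
```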
Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G
2013-01-01
Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are both highly invasive and provide less evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of an anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
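The sparse-sampling logic, presenting the noise burst at a randomized delay before each acquisition and reconstructing the BOLD time course from the per-trial responses, can be sketched by binning trials by delay and averaging. The toy hemodynamic response and trial counts below are assumptions, not the measured cat data.

```python
import numpy as np

rng = np.random.default_rng(6)

def toy_hrf(t, peak=3.5, width=1.5):
    """A toy hemodynamic response (Gaussian-shaped bump), for simulation only."""
    return np.exp(-((t - peak) ** 2) / (2 * width ** 2))

# randomized stimulus-to-acquisition delays (1-12 s), one BOLD sample per trial
delays = rng.integers(1, 13, 240).astype(float)
bold = toy_hrf(delays) + rng.normal(0, 0.2, delays.size)

# reconstruct the response time course by averaging trials sharing the same delay
unique_delays = np.arange(1, 13)
timecourse = np.array([bold[delays == d].mean() for d in unique_delays])
print("estimated peak latency:", unique_delays[np.argmax(timecourse)], "s")
```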
Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns
Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia
2017-01-01
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788
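A schematic of the reconstruction approach: learn a linear mapping from fMRI response patterns back to spectrotemporal modulation features and score the reconstruction per modulation channel. The ridge penalty, synthetic dimensions, and noise level below are assumptions; the published model is considerably richer (frequency-specific modulation representations).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_sounds, n_vox, n_mod = 200, 400, 30   # sounds x voxels, and modulation features per sound

mod_features = rng.normal(0, 1, (n_sounds, n_mod))            # spectrotemporal modulation energies
encoding = rng.normal(0, 1, (n_mod, n_vox))                   # toy voxelwise tuning
fmri = mod_features @ encoding + rng.normal(0, 3.0, (n_sounds, n_vox))

X_tr, X_te, Y_tr, Y_te = train_test_split(fmri, mod_features, test_size=0.25, random_state=0)
decoder = Ridge(alpha=100.0).fit(X_tr, Y_tr)                  # map voxels -> modulation features
Y_hat = decoder.predict(X_te)

# reconstruction accuracy per modulation channel (correlation across test sounds)
acc = [np.corrcoef(Y_te[:, j], Y_hat[:, j])[0, 1] for j in range(n_mod)]
print("mean reconstruction accuracy:", np.mean(acc).round(2))
```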
Optimal resource allocation for novelty detection in a human auditory memory.
Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K
1996-11-04
A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.
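The quantitative prediction, response amplitude proportional to the information conveyed by the stimulus, can be made concrete with the self-information of a deviant occurring with probability p, i.e. I(p) = -log2 p. The sketch below tabulates this for a few deviant probabilities; the probabilities themselves are illustrative assumptions.

```python
import numpy as np

def stimulus_information_bits(p):
    """Self-information of a stimulus with occurrence probability p (in bits)."""
    return -np.log2(p)

# under the proposed account, MMR amplitude scales with the deviant's information content
for p_deviant in (0.5, 0.2, 0.1, 0.05):
    bits = stimulus_information_bits(p_deviant)
    print(f"p(deviant)={p_deviant:>4}: information = {bits:4.2f} bits -> predicted relative MMR ~ {bits:4.2f}")
```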
Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I
2018-01-17
In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.
Emotional context enhances auditory novelty processing in superior temporal gyrus.
Domínguez-Borràs, Judith; Trautmann, Sina-Alexa; Erhard, Peter; Fehr, Thorsten; Herrmann, Manfred; Escera, Carles
2009-07-01
Visualizing emotionally loaded pictures intensifies peripheral reflexes toward sudden auditory stimuli, suggesting that the emotional context may potentiate responses elicited by novel events in the acoustic environment. However, psychophysiological results have reported that attentional resources available to sounds become depleted, as attention allocation to emotional pictures increases. These findings have raised the challenging question of whether an emotional context actually enhances or attenuates auditory novelty processing at a central level in the brain. To solve this issue, we used functional magnetic resonance imaging to first identify brain activations induced by novel sounds (NOV) when participants made a color decision on visual stimuli containing both negative (NEG) and neutral (NEU) facial expressions. We then measured modulation of these auditory responses by the emotional load of the task. Contrary to what was assumed, activation induced by NOV in superior temporal gyrus (STG) was enhanced when subjects responded to faces with a NEG emotional expression compared with NEU ones. Accordingly, NOV yielded stronger behavioral disruption on subjects' performance in the NEG context. These results demonstrate that the emotional context modulates the excitability of auditory and possibly multimodal novelty cerebral regions, enhancing acoustic novelty processing in a potentially harming environment.
2012-01-01
Background A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and on focal brain blood flow in the related sensory cortices. Methods Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and a flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior to C4, during visual, auditory and somatosensory stimulation. The oxygenated-hemoglobin (oxy-Hb) concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results Latencies of the late component of all sensory evoked potentials were significantly shortened, and the amplitude of auditory evoked potentials increased, when the neck was in the flexed position. Oxy-Hb concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position; the left visual cortex is the region that receives input from the right visual hemi-field used here. In addition, oxy-Hb concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities of sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306
Balconi, Michela; Vanutelli, Maria Elide
2016-01-01
The present research explored the effect of cross-modal integration of emotional cues (auditory and visual (AV)) compared with visual-only (V) emotional cues when observing interspecies interactions. Brain activity was monitored while subjects processed AV and V situations, each representing an emotional (positive or negative), interspecies (human-animal) interaction. Congruence (emotionally congruous or incongruous visual and auditory patterns) was also manipulated. Electroencephalographic (EEG) brain oscillations (from delta to beta) were analyzed, and cortical source localization (by standardized Low Resolution Brain Electromagnetic Tomography) was applied to the data. Low-frequency bands (mainly delta and theta) showed a significant increase in brain activity in response to negative compared with positive interactions within the right hemisphere. Moreover, differences were found based on stimulation type, with an increased effect for AV compared with V. Finally, the delta band supported lateralized right dorsolateral prefrontal cortex (DLPFC) activity in response to negative and incongruous interspecies interactions, mainly for AV. The contribution of cross-modality, congruence (incongruous patterns), and lateralization (right DLPFC) in response to interspecies emotional interactions is discussed in light of a "negative lateralized effect."
Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise
Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep
2015-01-01
The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, stimulated by short presentations (1 min) of binaural beats with beat frequencies varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, which were analysed in terms of spectral power and functional connectivity, the latter measured by two phase-synchrony-based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment with both linear and nonlinear relationships to the beating frequencies. PMID:26065708
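The abstract above names the phase locking value (PLV) as one of its phase-synchrony measures. The sketch below shows the standard PLV computation between two band-passed EEG channels via the Hilbert transform; the sampling rate, band edges, and synthetic signals are illustrative assumptions, not parameters from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(8.0, 12.0), order=4):
    """Standard PLV between two signals within a frequency band.

    PLV = |time average of exp(i * (phase_x - phase_y))|, ranging from
    0 (no consistent phase relation) to 1 (perfect phase locking).
    """
    # Band-pass both channels (an illustrative alpha band here).
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)

    # Instantaneous phases from the analytic signal.
    phix = np.angle(hilbert(xf))
    phiy = np.angle(hilbert(yf))
    return np.abs(np.mean(np.exp(1j * (phix - phiy))))

# Illustrative use on two synthetic 1-min "EEG" channels sampled at 250 Hz.
fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(f"alpha-band PLV: {phase_locking_value(x, y, fs):.2f}")
```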
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.
Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd
2014-11-01
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences comprised six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite the identity of the applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded the auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain thus showed a capacity to anticipate expected auditory target stimuli on the basis of non-auditory information, and sensory brain activation mirrored expectation rather than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.
Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex.
Kajikawa, Yoshinao; Smiley, John F; Schroeder, Charles E
2017-10-18
Prior studies have reported "local" field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be "contaminated" by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of the visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such "far-field" activity, in addition to, or in the absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the default assumption is that FPs originate from local activity, and thus are termed "local" (LFP). We examine this general problem in the context of previously reported face-evoked FPs in macaque auditory cortex. Our findings suggest that face-FPs are indeed generated in the underlying inferotemporal cortex and volume-conducted to the auditory cortex. The note of caution raised by these findings is of particular importance for studies that seek to assign FP/LFP recordings to specific cortical layers. Copyright © 2017 the authors.
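The current source density analysis referred to above is conventionally estimated as the negative second spatial derivative of the laminar LFP profile across equally spaced contacts. The sketch below implements that textbook estimator; the contact spacing, conductivity value, and example array are illustrative assumptions, not recording parameters from the study.

```python
import numpy as np

def csd_second_derivative(lfp, spacing_um=100.0, sigma=0.3):
    """Estimate CSD from a laminar LFP profile (channels x time).

    Uses the standard approximation CSD(z) ~ -sigma * d2(phi)/dz2, evaluated
    with a three-point finite difference along the depth axis. The two
    outermost channels are lost because the derivative is undefined there.
    """
    h = spacing_um * 1e-6   # contact spacing in metres (assumed value)
    phi = np.asarray(lfp, dtype=float)
    d2phi = (phi[:-2, :] - 2.0 * phi[1:-1, :] + phi[2:, :]) / h**2
    return -sigma * d2phi   # sigma: assumed tissue conductivity (S/m)

# Illustrative use: 16 laminar contacts, 300 samples of trial-averaged LFP.
lfp = np.random.default_rng(1).standard_normal((16, 300)) * 1e-4
csd = csd_second_derivative(lfp)
print(csd.shape)  # (14, 300): the two boundary channels are dropped
```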
Brain Responses to Lexical-Semantic Priming in Children At-Risk for Dyslexia
ERIC Educational Resources Information Center
Torkildsen, Janne von Koss; Syversen, Gro; Simonsen, Hanne Gram; Moen, Inger; Lindgren, Magnus
2007-01-01
Deviances in early event-related potential (ERP) components reflecting auditory and phonological processing are well-documented in children at familial risk for dyslexia. However, little is known about brain responses which index processing in other linguistic domains such as lexicon, semantics and syntax in this group. The present study…
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.
Woolley, Sarah M N; Portfors, Christine V
2013-11-01
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Representations of Pitch and Timbre Variation in Human Auditory Cortex
2017-01-01
Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
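The multivoxel pattern analysis mentioned above usually amounts to cross-validated classification of condition labels from voxel-wise response patterns. The sketch below illustrates that general approach with a linear support-vector classifier on synthetic data; the data shapes, classifier, and cross-validation scheme are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for per-trial voxel patterns from one auditory ROI:
# 80 trials x 500 voxels, labelled 0 = pitch-varying, 1 = timbre-varying.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))
y = np.repeat([0, 1], 40)
X[y == 1, :25] += 0.4  # inject a weak multivariate difference for the demo

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```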
Neural Entrainment to Auditory Imagery of Rhythms.
Okawa, Haruki; Suefusa, Kaori; Tanaka, Toshihisa
2017-01-01
A method of reconstructing perceived or imagined music by analyzing brain activity has not yet been established. As a first step toward developing such a method, we aimed to reconstruct the imagery of rhythm, which is one element of music. It has been reported that a periodic electroencephalogram (EEG) response is elicited while a human imagines a binary or ternary meter on a musical beat. However, it is not clear whether or not brain activity synchronizes with fully imagined beat and meter without auditory stimuli. To investigate neural entrainment to imagined rhythm during auditory imagery of beat and meter, we recorded EEG while nine participants (eight males and one female) imagined three types of rhythm without auditory stimuli but with visual timing, and then we analyzed the amplitude spectra of the EEG. We also recorded EEG while the participants only gazed at the visual timing as a control condition to confirm the visual effect. Furthermore, we derived features of the EEG using canonical correlation analysis (CCA) and conducted an experiment to individually classify the three types of imagined rhythm from the EEG. The results showed that classification accuracies exceeded the chance level in all participants. These results suggest that auditory imagery of meter elicits a periodic EEG response that changes at the imagined beat and meter frequency even in the fully imagined conditions. This study represents the first step toward the realization of a method for reconstructing the imagined music from brain activity.
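The abstract above describes deriving EEG features with canonical correlation analysis (CCA) and classifying the three imagined rhythms. One common way to do this, shown in the sketch below, is to correlate multichannel EEG epochs with sine/cosine reference signals at candidate meter frequencies and label each epoch by the best-correlating template; the frequencies, epoch length, and nearest-template rule are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def meter_references(freq_hz, n_samples, fs, n_harmonics=2):
    """Sine/cosine reference matrix (samples x 2*n_harmonics) for one frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq_hz * t),
                 np.cos(2 * np.pi * h * freq_hz * t)]
    return np.column_stack(refs)

def classify_epoch(epoch, fs, candidate_freqs=(0.8, 1.6, 2.4)):
    """Assign an EEG epoch (samples x channels) to the candidate frequency whose
    reference signals yield the highest first canonical correlation."""
    corrs = []
    for f in candidate_freqs:
        refs = meter_references(f, epoch.shape[0], fs)
        u, v = CCA(n_components=1).fit_transform(epoch, refs)
        corrs.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(corrs))

# Illustrative use: a 4-s, 8-channel epoch at 250 Hz containing a 1.6-Hz rhythm.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
epoch = 0.5 * np.sin(2 * np.pi * 1.6 * t)[:, None] + rng.standard_normal((t.size, 8))
print("predicted class index:", classify_epoch(epoch, fs))
```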
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Samson, F; Zeffiro, T A; Doyon, J; Benali, H; Mottron, L
2015-09-01
A continuum of phenotypes makes up the autism spectrum (AS). In particular, individuals show large differences in language acquisition, ranging from precocious speech to severe speech onset delay. However, the neurological origin of this heterogeneity remains unknown. Here, we sought to determine whether AS individuals differing in speech acquisition show different cortical responses to auditory stimulation and morphometric brain differences. Whole-brain activity following exposure to non-social sounds was investigated. Individuals in the AS were classified according to the presence or absence of Speech Onset Delay (AS-SOD and AS-NoSOD, respectively) and were compared with IQ-matched typically developing individuals (TYP). AS-NoSOD participants displayed greater task-related activity than TYP in the inferior frontal gyrus and peri-auditory middle and superior temporal gyri, which are associated with language processing. Conversely, the AS-SOD group only showed enhanced activity in the vicinity of the auditory cortex. We detected no differences in brain structure between groups. This is the first study to demonstrate the existence of differences in functional brain activity between AS individuals divided according to their pattern of speech development. These findings support the Trigger-threshold-target model and indicate that the occurrence of speech onset delay in AS individuals depends on the location of cortical functional reallocation, which favors perception in AS-SOD and language in AS-NoSOD. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System
Anderson, Lucy A.
2016-01-01
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
Auditory brainstem response to complex sounds: a tutorial
Skoe, Erika; Kraus, Nina
2010-01-01
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
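Two analyses emphasized in the cABR literature that this tutorial surveys are stimulus-to-response cross-correlation (to estimate neural lag and response fidelity) and the spectral amplitude of the frequency-following portion of the response. The sketch below illustrates both on synthetic data; the sampling rate, lag window, and waveforms are illustrative assumptions rather than values from the tutorial.

```python
import numpy as np

def stimulus_response_lag(stimulus, response, fs, max_lag_ms=15.0):
    """Cross-correlate stimulus and response; return (best lag in ms, peak r)."""
    max_lag = int(max_lag_ms * fs / 1000)
    s = (stimulus - stimulus.mean()) / stimulus.std()
    r = (response - response.mean()) / response.std()
    lags = np.arange(1, max_lag + 1)
    corrs = [np.corrcoef(s[:-lag], r[lag:])[0, 1] for lag in lags]
    best = int(np.argmax(corrs))
    return lags[best] * 1000 / fs, corrs[best]

def spectral_amplitude(response, fs, freq_hz, bandwidth_hz=5.0):
    """Mean FFT amplitude of the response in a narrow band around freq_hz."""
    spec = np.abs(np.fft.rfft(response)) / response.size
    freqs = np.fft.rfftfreq(response.size, 1 / fs)
    band = (freqs >= freq_hz - bandwidth_hz) & (freqs <= freq_hz + bandwidth_hz)
    return spec[band].mean()

# Illustrative data: a 100-Hz periodic "stimulus" and a delayed, noisy "response"
# sampled at 20 kHz (values chosen for the demo).
fs = 20000
t = np.arange(0, 0.17, 1 / fs)
stimulus = np.sin(2 * np.pi * 100 * t)
response = 0.3 * np.roll(stimulus, int(0.008 * fs))      # ~8 ms neural lag
response = response + 0.1 * np.random.default_rng(0).standard_normal(t.size)

lag_ms, r = stimulus_response_lag(stimulus, response, fs)
print(f"lag ~{lag_ms:.1f} ms, r = {r:.2f}, "
      f"F0 amplitude = {spectral_amplitude(response, fs, 100):.3f}")
```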
Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome
ERIC Educational Resources Information Center
Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.
2010-01-01
Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…
Auditory Habituation in the Fetus and Neonate: An fMEG Study
ERIC Educational Resources Information Center
Muenssinger, Jana; Matuz, Tamara; Schleger, Franziska; Kiefer-Schmidt, Isabelle; Goelz, Rangmar; Wacker-Gussmann, Annette; Birbaumer, Niels; Preissl, Hubert
2013-01-01
Habituation--the most basic form of learning--is used to evaluate central nervous system (CNS) maturation and to detect abnormalities in fetal brain development. In the current study, habituation, stimulus specificity and dishabituation of auditory evoked responses were measured in fetuses and newborns using fetal magnetoencephalography (fMEG). An…
Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M
2013-01-01
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.
Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate
Johnson, Luke A.; Della Santina, Charles C.
2016-01-01
Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and music and tonal language perception. How CI devices engage the brain at the single neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point out a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks. PMID:27927962
Auditory brain development in premature infants: the importance of early experience.
McMahon, Erin; Wintermark, Pia; Lahav, Amir
2012-04-01
Preterm infants in the neonatal intensive care unit (NICU) often close their eyes in response to bright lights, but they cannot close their ears in response to loud sounds. The sudden transition from the womb to the overly noisy world of the NICU increases the vulnerability of these high-risk newborns. There is a growing concern that the excess noise typically experienced by NICU infants disrupts their growth and development, putting them at risk for hearing, language, and cognitive disabilities. Preterm neonates are especially sensitive to noise because their auditory system is at a critical period of neurodevelopment, and they are no longer shielded by maternal tissue. This paper discusses the developmental milestones of the auditory system and suggests ways to enhance the quality control and type of sounds delivered to NICU infants. We argue that positive auditory experience is essential for early brain maturation and may be a contributing factor for healthy neurodevelopment. Further research is needed to optimize the hospital environment for preterm newborns and to increase their potential to develop into healthy children. © 2012 New York Academy of Sciences.
Hunter, Lisa L; Blankenship, Chelsea M; Gunter, Rebekah G; Keefe, Douglas H; Feeney, M Patrick; Brown, David K; Baroch, Kelly
2018-05-01
Examination of cochlear and neural potentials is necessary to assess sensory and neural status in infants, especially those cared for in neonatal intensive care units (NICU) who have high rates of hyperbilirubinemia and thus are at risk for auditory neuropathy (AN). The purpose of this study was to determine whether recording parameters commonly used in click-evoked auditory brain stem response (ABR) are useful for recording cochlear microphonic (CM) and Wave I in infants at risk for AN. Specifically, we analyzed CM, summating potential (SP), and Waves I, III, and V. The overall aim was to compare latencies and amplitudes of evoked responses in infants cared for in NICUs with infants in a well-baby nursery (WBN), both of which passed newborn hearing screening. This is a prospective study in which infants who passed ABR newborn hearing screening were grouped based on their birth history (WBN and NICU). All infants had normal hearing status when tested with diagnostic ABR at about one month of age, corrected for prematurity. Thirty infants (53 ears) from the WBN [mean corrected age at test = 5.0 weeks (wks.)] and thirty-two infants (59 ears) from the NICU (mean corrected age at test = 5.7 wks.) with normal hearing were included in this study. In addition, two infants were included as comparative case studies, one that was diagnosed with AN and another case that was diagnosed with bilateral sensorineural hearing loss (SNHL). Diagnostic ABR, including click and tone-burst air- and bone-conduction stimuli were recorded. Peak Waves I, III, and V; SP; and CM latency and amplitude (peak to trough) were measured to determine if there were differences in ABR and electrocochleography (ECochG) variables between WBN and NICU infants. No significant group differences were found between WBN and NICU groups for ABR waveforms, CM, or SP, including amplitude and latency values. The majority (75%) of the NICU group had hyperbilirubinemia, but overall, they did not show evidence of effects in their ECochG or ABR responses when tested at about one-month corrected age. These data may serve as a normative sample for NICU and well infant ECochG and ABR latencies at one-month corrected age. Two infant case studies, one diagnosed with AN and another with SNHL demonstrated the complexity of using ECochG and otoacoustic emissions to assess the risk of AN in individual cases. CM and SPs can be readily measured using standard click stimuli in both well and NICU infants. Normative ranges for latency and amplitude are useful for interpreting ECochG and ABR components. Inclusion of ECochG and ABR tests in a test battery that also includes otoacoustic emission and acoustic reflex tests may provide a more refined assessment of the risks of AN and SNHL in infants. American Academy of Audiology.
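Peak latency and peak-to-trough amplitude measurements of the kind reported here for Waves I, III, and V can be illustrated with a simple picking routine on an averaged waveform. The sketch below is a generic illustration on synthetic data; the search window, sampling rate, and waveform are assumptions, not the scoring procedure used in the study.

```python
import numpy as np

def peak_to_trough(waveform_uv, fs, window_ms):
    """Return (peak latency in ms, peak-to-trough amplitude in microvolts)
    within a latency window of an averaged ABR waveform."""
    t_ms = np.arange(waveform_uv.size) * 1000 / fs
    idx = np.where((t_ms >= window_ms[0]) & (t_ms <= window_ms[1]))[0]
    peak = idx[np.argmax(waveform_uv[idx])]
    after = idx[idx >= peak]                      # trough: minimum after the peak
    trough = after[np.argmin(waveform_uv[after])]
    return t_ms[peak], waveform_uv[peak] - waveform_uv[trough]

# Illustrative averaged ABR: a ~0.3-uV "Wave V" near 6.5 ms followed by a trough,
# on a 10-ms trace sampled at 20 kHz, plus noise (all values chosen for the demo).
fs = 20000
t_ms = np.arange(0, 10, 1000 / fs)
wave = 0.3 * np.exp(-((t_ms - 6.5) ** 2) / 0.2) - 0.15 * np.exp(-((t_ms - 7.5) ** 2) / 0.3)
wave = wave + 0.02 * np.random.default_rng(0).standard_normal(t_ms.size)

lat, amp = peak_to_trough(wave, fs, window_ms=(5.0, 9.0))
print(f"Wave V latency ~{lat:.2f} ms, peak-to-trough amplitude ~{amp:.2f} uV")
```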
Characterization of auditory synaptic inputs to gerbil perirhinal cortex
Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.
2015-01-01
The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from the PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when fluoro ruby was injected in ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918
The human auditory evoked response
NASA Technical Reports Server (NTRS)
Galambos, R.
1974-01-01
Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency, respectively, were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300 msec after the stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from the auditory nerve, but only from the cortex. It is evidence of recognition, a conscious process.
Functional MRI of the vocalization-processing network in the macaque brain
Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.
2015-01-01
Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546
Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia
Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard
2014-01-01
Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia that previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328
Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio
2016-01-01
The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggest that asynchrony in the brain's bilateral auditory processing system is associated with ADHD-like symptoms in children with ASD.
Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio
2012-01-01
Background Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. Methodology/Principal Findings A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). Conclusions/Significance This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs. PMID:22808289
Neural correlates of audiovisual integration in music reading.
Nichols, Emily S; Grahn, Jessica A
2016-10-01
Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity (MMN)) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Auditory fear conditioning modifies steady-state evoked potentials in the rat inferior colliculus.
Lockmann, André Luiz Vieira; Mourão, Flávio Afonso Gonçalves; Moraes, Marcio Flávio Dutra
2017-08-01
The rat inferior colliculus (IC) is a major midbrain relay for ascending inputs from the auditory brain stem and has been suggested to play a key role in the processing of aversive sounds. Previous studies have demonstrated that auditory fear conditioning (AFC) potentiates transient responses to brief tones in the IC, but it remains unexplored whether AFC modifies responses to sustained periodic acoustic stimulation, a type of response called the steady-state evoked potential (SSEP). Here we used an amplitude-modulated tone (a 10-kHz tone with a sinusoidal amplitude modulation of 53.7 Hz) as the conditioning stimulus (CS) in an AFC protocol (5 CSs per day on 3 consecutive days) while recording local field potentials (LFPs) from the IC. In the preconditioning session (day 1), the CS elicited prominent 53.7-Hz SSEPs. In the training session (day 2), foot shocks occurred at the end of each CS (paired group) or randomized in the inter-CS interval (unpaired group). In the test session (day 3), SSEPs markedly differed from preconditioning in the paired group: in the first two trials the phase to which the SSEP coupled to the CS amplitude envelope shifted ~90°; in the last two trials the SSEP power and the coherence of the SSEP with the CS amplitude envelope increased. LFP power decreased in frequency bands other than 53.7 Hz. In the unpaired group, SSEPs did not change in the test compared with preconditioning. Our results show that AFC causes dissociated changes in the phase and power of SSEPs in the IC. NEW & NOTEWORTHY Local field potential oscillations in the inferior colliculus follow the amplitude envelope of an amplitude-modulated tone, originating a neural response called the steady-state evoked potential. We show that auditory fear conditioning of an amplitude-modulated tone modifies two parameters of the steady-state evoked potentials in the inferior colliculus: first, the phase to which the evoked oscillation couples to the amplitude-modulated tone shifts; subsequently, the evoked oscillation power increases along with its coherence with the amplitude-modulated tone. Copyright © 2017 the American Physiological Society.
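The conditioning stimulus and the steady-state measure described above can be illustrated directly: an amplitude-modulated tone with the stated parameters (10-kHz carrier, 53.7-Hz sinusoidal modulation), and the power and phase of an LFP segment at the modulation frequency from its Fourier transform. The sampling rate, duration, and synthetic LFP below are illustrative assumptions.

```python
import numpy as np

fs = 24000           # sampling rate in Hz (illustrative value, not from the study)
t = np.arange(0, 2.0, 1 / fs)

# Conditioning stimulus from the abstract: 10-kHz tone, 53.7-Hz sinusoidal AM.
fc, fm = 10000.0, 53.7
stimulus = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Synthetic "LFP" that follows the AM envelope with a phase lag, plus noise.
rng = np.random.default_rng(0)
lfp = 0.2 * np.sin(2 * np.pi * fm * t - np.pi / 4) + 0.1 * rng.standard_normal(t.size)

# SSEP power and phase at the modulation frequency (nearest FFT bin).
spec = np.fft.rfft(lfp)
freqs = np.fft.rfftfreq(lfp.size, 1 / fs)
k = int(np.argmin(np.abs(freqs - fm)))
power_fm = np.abs(spec[k]) ** 2 / lfp.size
phase_fm = np.angle(spec[k])
print(f"SSEP at {freqs[k]:.1f} Hz: power = {power_fm:.3f}, phase = {phase_fm:.2f} rad")
```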
Male and female voices activate distinct regions in the male brain.
Sokhi, Dilraj S; Hunter, Michael D; Wilkinson, Iain D; Woodruff, Peter W R
2005-09-01
In schizophrenia, auditory verbal hallucinations (AVHs) are likely to be perceived as gender-specific. Given that functional neuro-imaging correlates of AVHs involve multiple brain regions principally including auditory cortex, it is likely that those brain regions responsible for attribution of gender to speech are invoked during AVHs. We used functional magnetic resonance imaging (fMRI) and a paradigm utilising 'gender-apparent' (unaltered) and 'gender-ambiguous' (pitch-scaled) male and female voice stimuli to test the hypothesis that male and female voices activate distinct brain areas during gender attribution. The perception of female voices, when compared with male voices, elicited greater activation of the right anterior superior temporal gyrus, near the superior temporal sulcus. Similarly, male voice perception activated the mesio-parietal precuneus area. These different gender associations could not be explained by either simple pitch perception or behavioural response, because the activations that we observed were conjointly elicited by both 'gender-apparent' and 'gender-ambiguous' voices. The results of this study demonstrate that, in the male brain, the perception of male and female voices activates distinct brain regions.
[Prospective study with auditory evoked potentials of the brain stem in children at risk].
Navarro Rivero, B; González Díaz, E; Marrero Santos, L; Martínez Toledano, I; Murillo Díaz, M J; Valiño Colás, M J
1999-04-01
The aim of this study was to evaluate methods of hypoacusis screening. Early detection of hearing problems is vital for prompt rehabilitation. For this reason, based on the criteria of the Comisión Española para la Detección Precoz de la Hipoacusia (Spanish Commission for the Early Detection of Hypoacusis), we carried out a prospective study, from January to May 1998, evaluating patients at risk of hypoacusis. The study included 151 patients aged from birth to 14 years. Medical histories were reviewed and brainstem auditory evoked responses (BAER) were recorded. The most common reason for consultation among the 151 patients included in our study was suspected hypoacusis. Seventy-one (47%) presented pathological BAERs, 37 of them bilateral. In most cases the hearing loss was of cochlear origin; 11 patients had severe deafness, 4 bilateral (3 with suspected hypoacusis and 1 with hyperbilirubinemia) and 7 unilateral. BAER is a good screening method for children at risk. It is an innocuous, objective and specific test that does not require the patient's cooperation. The rate of positive findings was high (47%).
Regulation of body temperature in the blue-tongued lizard.
Hammel, H T; Caldwell, F T; Abrams, R M
1967-06-02
Lizards (Tiliqua scincoides) regulated their internal body temperature by moving back and forth between 15 degrees and 45 degrees C environments to maintain colonic and brain temperatures between 30 degrees and 37 degrees C. A pair of thermodes was implanted across the preoptic region of the brain stem, and a reentrant tube for a thermocouple was implanted in the brain stem. Heating the brain stem to 41 degrees C activated the exit response from the hot environment at a colonic temperature 1 to 2 degrees C lower than normal, whereas cooling the brain stem to 25 degrees C delayed the exit from the hot environment until the colonic temperature was 1 to 2 degrees C higher than normal. The behavioral thermoregulatory responses of this ectotherm appear to be activated by a combination of hypothalamic and other body temperatures.
Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R
2007-12-01
Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.
Albouy, Philippe; Mattout, Jérémie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaëtan; Aguera, Pierre-Emmanuel; Daligault, Sébastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara
2013-05-01
Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices as well as a decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide first evidence of altered functioning of the auditory cortices during pitch perception and memory in congenital amusia. They further support the hypothesis that in neurodevelopmental disorders impacting high-level functions (here musical abilities), abnormalities in cerebral processing can be observed in early brain responses.
Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis
Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Schott, Jonathan M.; Rohrer, Jonathan D.; Rossor, Martin N.; Warren, Jason D.
2015-01-01
Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA) and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music (‘musicophilia’) occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717
Sitek, Kevin R; Cai, Shanqing; Beal, Deryk S; Perkell, Joseph S; Guenther, Frank H; Ghosh, Satrajit S
2016-01-01
Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex (OFC). Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and OFC may underlie successful compensatory mechanisms by more fluent stutterers.
A neural network model of normal and abnormal auditory information processing.
Du, X; Jansen, B H
2011-08-01
The ability of the brain to attenuate the response to irrelevant sensory stimulation is referred to as sensory gating. A gating deficiency has been reported in schizophrenia. To study the neural mechanisms underlying sensory gating, a neuroanatomically inspired model of auditory information processing has been developed. The mathematical model consists of lumped parameter modules representing the thalamus (TH), the thalamic reticular nucleus (TRN), auditory cortex (AC), and prefrontal cortex (PC). It was found that the membrane potential of the pyramidal cells in the PC module replicated auditory evoked potentials, recorded from the scalp of healthy individuals, in response to pure tones. Also, the model produced substantial attenuation of the response to the second of a pair of identical stimuli, just as seen in actual human experiments. We also tested the viewpoint that schizophrenia is associated with a deficit in prefrontal dopamine (DA) activity, which would lower the excitatory and inhibitory feedback gains in the AC and PC modules. Lowering these gains by less than 10% resulted in model behavior resembling the brain activity seen in schizophrenia patients, and replicated the reported gating deficits. The model suggests that the TRN plays a critical role in sensory gating, with the smaller response to a second tone arising from a reduction in inhibition of TH by the TRN. Copyright © 2011 Elsevier Ltd. All rights reserved.
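The module structure described in this abstract lends itself to a small numerical sketch. The code below implements a single, generic Jansen-Rit-style lumped-parameter module with a feedback-gain factor; it is only an illustration of the modelling style, not the published TH/TRN/AC/PC model, and the parameter values, the gain placement and the paired-stimulus input are all placeholders.

```python
import numpy as np

def sigm(v, e0=2.5, v0=6.0, r=0.56):
    # Mean firing rate (s^-1) as a sigmoid of mean membrane potential (mV)
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def jansen_rit(p, dt=1e-4, A=3.25, B=22.0, a=100.0, b=50.0, C=135.0, gain=1.0):
    """Single lumped-parameter module in the standard Jansen-Rit form.
    'gain' scales the excitatory and inhibitory feedback onto the pyramidal
    cells, loosely mimicking the <10% gain reduction discussed above.
    p: external input pulse density (s^-1), one value per time step."""
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
    y0 = y1 = y2 = z0 = z1 = z2 = 0.0
    out = np.empty(len(p))
    for i in range(len(p)):
        # Each population: y'' = (gain)*input*A*a - 2*a*y' - a^2*y (second-order synapse)
        dz0 = A * a * sigm(y1 - y2) - 2 * a * z0 - a * a * y0
        dz1 = A * a * (p[i] + gain * C2 * sigm(C1 * y0)) - 2 * a * z1 - a * a * y1
        dz2 = B * b * gain * C4 * sigm(C3 * y0) - 2 * b * z2 - b * b * y2
        z0 += dt * dz0; y0 += dt * z0
        z1 += dt * dz1; y1 += dt * z1
        z2 += dt * dz2; y2 += dt * z2
        out[i] = y1 - y2          # summed pyramidal membrane potential (EEG-like output)
    return out

# Paired identical "tones" as brief input pulses, with normal vs. slightly reduced gains
p = np.full(20000, 120.0)         # 2 s of background input at dt = 0.1 ms
p[2000:2500] += 300.0             # first stimulus
p[12000:12500] += 300.0           # second stimulus
normal_response = jansen_rit(p, gain=1.0)
reduced_gain_response = jansen_rit(p, gain=0.92)
```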
A Brain System for Auditory Working Memory.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
2016-04-20
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.
Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E
2006-04-01
This study evaluates the effects of a high-frequency hearing loss, simulated by the high-pass-noise masking method, on the characteristics of click-evoked brain-stem auditory evoked potentials (BAEP) in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) to wave V threshold, using 5 dB steps, in eleven 58- to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation difference potential (RCDP). The procedure was repeated while constant-level, high-pass filtered (HPF) noise was superimposed on the click. Cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, wave V and RCDP thresholds, and the slope of the wave V latency-intensity curve (LIC), were collected. The intensity range over which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no-noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the superimposed HPF noise reached the 4 kHz region. The wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing at frequencies of 8 kHz and above escaped detection by click BAEP study in dogs. Frequencies above 13 kHz were, however, not specifically addressed in this study.
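To make the add/subtract step concrete, here is a minimal sketch of how the two single-polarity averages could be combined; the array names and the halving convention are assumptions, not the authors' exact procedure.

```python
import numpy as np

def polarity_combine(rarefaction_avg, condensation_avg):
    """rarefaction_avg / condensation_avg: averaged BAEP sweeps of equal length
    (e.g. in microvolts) recorded to rarefaction and condensation clicks."""
    # Adding the two polarities approximates an alternating-polarity average,
    # cancelling polarity-following components such as the cochlear microphonic.
    alternate_equiv = (np.asarray(rarefaction_avg) + np.asarray(condensation_avg)) / 2.0
    # Subtracting them isolates the rarefaction-condensation difference potential (RCDP).
    rcdp = (np.asarray(rarefaction_avg) - np.asarray(condensation_avg)) / 2.0
    return alternate_equiv, rcdp
```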
Early auditory processing in musicians and dancers during a contemporary dance piece
Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari
2016-01-01
The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied with a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG) as has already been done with functional magnetic resonance (fMRI), these two brain imaging methods complementing each other. PMID:27611929
The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2016-02-03
Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.
Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.
Rutkowski, Tomasz M; Mori, Hiromu
2015-04-15
The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye movements) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain-computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone-conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain-computer interface (tbcaBCI). To further remove EEG interference and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more efficient than empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illustrated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, with classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
Habituation deficit of auditory N100m in patients with fibromyalgia.
Choi, W; Lim, M; Kim, J S; Chung, C K
2016-11-01
Habituation refers to the brain's inhibitory mechanism against sensory overload, and its brain correlate has been investigated in the form of a well-defined event-related potential, the N100 (N1). Fibromyalgia is an extensively described chronic pain syndrome with concurrent manifestations of reduced tolerance and enhanced sensation of painful and non-painful stimulation, suggesting an association with central amplification across all sensory domains. Among diverse sensory modalities, we utilized repetitive auditory stimulation to explore the anomalous sensory information processing in fibromyalgia as evidenced by N1 habituation. Auditory N1 was assessed in 19 fibromyalgia patients and 21 age-, education- and gender-matched healthy control subjects under a duration-deviant passive oddball paradigm with magnetoencephalography recording. The brain signals of the first standard stimulus (following each deviant) and the last standard stimulus (preceding each deviant) were analysed to identify N1 responses. The N1 amplitude difference and adjusted amplitude ratio were computed as habituation indices. Fibromyalgia patients showed a lower N1 amplitude difference (left hemisphere: p = 0.004; right hemisphere: p = 0.034) and adjusted N1 amplitude ratio (left hemisphere: p = 0.001; right hemisphere: p = 0.052) than healthy control subjects, indicating deficient auditory habituation. Further, an augmented N1 amplitude pattern (p = 0.029) during stimulus repetition was observed in fibromyalgia patients. Fibromyalgia patients failed to demonstrate auditory N1 habituation to repetitively presented stimuli, which indicates compromised early auditory information processing. Our findings provide neurophysiological evidence of inhibitory failure and cortical augmentation in fibromyalgia. WHAT'S ALREADY KNOWN ABOUT THIS TOPIC?: Fibromyalgia has been associated with altered filtering of irrelevant somatosensory input. However, whether this abnormality extends to the auditory sensory system remains controversial. The N100, an event-related potential, has been widely utilized to assess the brain's habituation capacity against sensory overload. WHAT DOES THIS STUDY ADD?: Fibromyalgia patients showed a deficit in N100 habituation to repetitive auditory stimuli, indicating compromised early auditory functioning. This study identified deficient inhibitory control over irrelevant auditory stimuli in fibromyalgia. © 2016 European Pain Federation - EFIC®.
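A minimal sketch of the two habituation indices named above; the exact 'adjusted amplitude ratio' is not defined in the abstract, so the normalised ratio below is only one plausible convention.

```python
def n1_habituation_indices(first_amp, last_amp):
    """Habituation indices from N1 amplitudes to the first standard (after a
    deviant) and the last standard (before the next deviant).
    The normalised ratio is an assumed convention, not the paper's formula."""
    amplitude_difference = first_amp - last_amp                    # larger = more habituation
    adjusted_ratio = (first_amp - last_amp) / (first_amp + last_amp)
    return amplitude_difference, adjusted_ratio
```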
Mohebbi, Mehrnaz; Mahmoudian, Saeid; Alborzi, Marzieh Sharifian; Najafi-Koopaie, Mojtaba; Farahani, Ehsan Darestani; Farhadi, Mohammad
2014-09-01
To investigate the association of handedness with auditory middle latency responses (AMLRs) using topographic brain mapping by comparing amplitudes and latencies in frontocentral and hemispheric regions of interest (ROIs). The study included 44 healthy subjects with normal hearing (22 left handed and 22 right handed). AMLRs were recorded from 29 scalp electrodes in response to binaural 4-kHz tone bursts. Frontocentral ROI comparisons revealed that Pa and Pb amplitudes were significantly larger in the left-handed than the right-handed group. Topographic brain maps showed different distributions in AMLR components between the two groups. In hemispheric comparisons, Pa amplitude differed significantly across groups. A left-hemisphere emphasis of Pa was found in the right-handed group but not in the left-handed group. This study provides evidence that handedness is associated with AMLR components in frontocentral and hemispheric ROI. Handedness should be considered an essential factor in the clinical or experimental use of AMLRs.
Draganova, R; Schollbach, A; Schleger, F; Braendle, J; Brucker, S; Abele, H; Kagan, K O; Wallwiener, D; Fritsche, A; Eswaran, H; Preissl, H
2018-06-01
The human fetal auditory system is functional around the 25th week of gestational age, when the thalamocortical connections are established. Fetal magnetoencephalography (fMEG) provides evidence for fetal auditory brain responses to pure tones and syllables. Fifty-five pregnant women between 31 and 40 weeks of gestation were included in the study. Fetal MEG was recorded during presentation to the maternal abdomen of an amplitude-modulated (AM) tone with a carrier frequency of 500 Hz, modulated at low modulation rates (MRs) of 2/s and 4/s, a middle MR of 8/s, and high MRs of 27/s, 42/s, 78/s and 91/s. The aim was to determine whether the fetal brain responds differently to envelope slopes and intensity change at the onset of the AM sounds. A significant decrease in the response latencies of transient event-related responses (ERR) to high and middle MRs in comparison with the low MRs was observed. The highest fetal response rates were achieved by modulation rates of 2/s, 4/s and 27/s (70%, 57%, and 86%, respectively). Additionally, a maturation effect of the ERR (response latency vs. gestational age) was observed only for the 4/s MR. The significant difference between the response latencies to low, middle, and high MRs suggests that, even before birth, the fetal brain processes sound-onset slopes in different integration time windows, depending on the time course of the intensity increase or the stimulus power density at onset, which is a prerequisite for language acquisition. Copyright © 2018 Elsevier B.V. All rights reserved.
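A brief sketch, under assumed duration, sampling rate and modulation depth, of how 500 Hz amplitude-modulated tones at the listed modulation rates could be generated.

```python
import numpy as np

def am_tone(mod_rate_hz, duration_s=1.0, fs=44100, carrier_hz=500.0, depth=1.0):
    """Amplitude-modulated tone of the kind described above (500 Hz carrier);
    duration, sampling rate and modulation depth are illustrative choices."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 0.5 * (1.0 + depth * np.sin(2.0 * np.pi * mod_rate_hz * t))
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

# Low, middle and high modulation rates used in the study
stimuli = {mr: am_tone(mr) for mr in (2, 4, 8, 27, 42, 78, 91)}
```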
Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.
2013-01-01
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965
Schabus, Manuel; Dang-Vu, Thien Thanh; Heib, Dominik Philip Johannes; Boly, Mélanie; Desseilles, Martin; Vandewalle, Gilles; Schmidt, Christina; Albouy, Geneviève; Darsaud, Annabelle; Gais, Steffen; Degueldre, Christian; Balteau, Evelyne; Phillips, Christophe; Luxen, André; Maquet, Pierre
2012-01-01
The present study aimed at identifying the neurophysiological responses associated with auditory stimulation during non-rapid eye movement (NREM) sleep using simultaneous electroencephalography (EEG)/functional magnetic resonance imaging (fMRI) recordings. It was reported earlier that auditory stimuli produce bilateral activation in auditory cortex, thalamus, and caudate during both wakefulness and NREM sleep. However, due to the spontaneous membrane potential fluctuations cortical responses may be highly variable during NREM. Here we now examine the modulation of cerebral responses to tones depending on the presence or absence of sleep spindles and the phase of the slow oscillation. Thirteen healthy young subjects were scanned successfully during stage 2-4 NREM sleep in the first half of the night in a 3 T scanner. Subjects were not sleep-deprived and sounds were post hoc classified according to (i) the presence of sleep spindles or (ii) the phase of the slow oscillation during (±300 ms) tone delivery. These detected sounds were then entered as regressors of interest in fMRI analyses. Interestingly wake-like responses - although somewhat altered in size and location - persisted during NREM sleep, except during present spindles (as previously published in Dang-Vu et al., 2011) and the negative going phase of the slow oscillation during which responses became less consistent or even absent. While the phase of the slow oscillation did not alter brain responses in primary sensory cortex, it did modulate responses at higher cortical levels. In addition EEG analyses show a distinct N550 response to tones during the presence of light sleep spindles and suggest that in deep NREM sleep the brain is more responsive during the positive going slope of the slow oscillation. The presence of short temporal windows during which the brain is open to external stimuli is consistent with the fact that even during deep sleep meaningful events can be detected. Altogether, our results emphasize the notion that spontaneous fluctuations of brain activity profoundly modify brain responses to external information across all behavioral states, including deep NREM sleep.
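A rough sketch of the post hoc phase-sorting step described above; the 0.5-1.25 Hz slow-oscillation band, the filter order and the use of the Hilbert phase are assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def so_phase_at_tones(eeg, fs, tone_onset_samples, band=(0.5, 1.25)):
    """Return the slow-oscillation phase (radians) at each tone onset.
    'eeg' is a single NREM EEG channel; band limits and filter order are assumed."""
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    slow_osc = filtfilt(b, a, eeg)                 # band-limited slow oscillation
    phase = np.angle(hilbert(slow_osc))            # instantaneous phase
    return phase[np.asarray(tone_onset_samples)]

# Tones can then be split into negative-going vs. positive-going phase bins and
# entered as separate regressors of interest in the fMRI analysis.
```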
A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie.
Hanke, Michael; Baumgartner, Florian J; Ibe, Pierre; Kaule, Falko R; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg
2014-01-01
Here we present a high-resolution functional magnetic resonance (fMRI) dataset - 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film ("Forrest Gump"). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures - from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized.
Music training alters the course of adolescent auditory development.
Tierney, Adam T; Krizman, Jennifer; Kraus, Nina
2015-08-11
Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.
Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.
2017-01-01
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492
Roshal, L M; Tzyb, A F; Pavlova, L N; Soushkevitch, G N; Semenova, J B; Javoronkov, L P; Kolganova, O I; Konoplyannikov, A G; Shevchuk, A S; Yujakov, V V; Karaseva, O V; Ivanova, T F; Chernyshova, T A; Konoplyannikova, O A; Bandurko, L N; Marey, M V; Sukhikh, G T
2009-07-01
We studied the effect of systemic transplantation of human stem cells from various tissues on cognitive functions of the brain in rats during the delayed period after experimental brain injury. Stem cells were shown to increase the efficacy of medical treatment with metabolic and symptomatic drugs for recovery of cognitive functions. They accelerated the formation of the conditioned defense response. Fetal neural stem cells had a stronger effect on some parameters of cognitive function 2 months after brain injury. The efficacy of bone marrow mesenchymal stem cells from adult humans or fetuses was higher 3 months after brain injury.
Neural plasticity expressed in central auditory structures with and without tinnitus
Roberts, Larry E.; Bosnyak, Daniel J.; Thompson, David C.
2012-01-01
Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To assess this assumption, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by electroencephalography (EEG) are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated (AM) sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response known to localize to primary and non-primary auditory cortex, respectively. P2 amplitude increased over training sessions equally in participants with tinnitus and in control subjects, suggesting normal remodeling of non-primary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls the phase delay between the 40-Hz response and stimulus waveforms reduced by about 10° over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not non-primary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected. PMID:22654738
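A minimal sketch of how 40-Hz ASSR amplitude and phase might be extracted from an averaged epoch; the single-frequency Fourier estimate is an assumption, not necessarily the authors' method.

```python
import numpy as np

def assr_amplitude_phase(avg_epoch, fs, f=40.0):
    """Amplitude and phase of the steady-state response at f Hz, estimated as the
    Fourier coefficient of the averaged epoch at that frequency. The phase delay
    relative to the stimulus can then be taken as the difference between this
    phase and the phase of the 40-Hz modulation envelope."""
    t = np.arange(len(avg_epoch)) / fs
    coeff = 2.0 * np.mean(np.asarray(avg_epoch) * np.exp(-2j * np.pi * f * t))
    return np.abs(coeff), np.angle(coeff)
```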
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
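A toy sketch, in PyTorch, of the shared-then-branched organisation described above; layer counts, sizes and class counts are placeholders, and this is not the published network.

```python
import torch
import torch.nn as nn

class BranchedAudioNet(nn.Module):
    """Early convolutional stages shared between tasks, followed by separate
    speech (word) and music (genre) heads; all dimensions are illustrative."""
    def __init__(self, n_word_classes=500, n_genre_classes=40):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.speech_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_word_classes))
        self.music_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_genre_classes))

    def forward(self, cochleagram):            # cochleagram: (batch, 1, freq, time)
        h = self.shared(cochleagram)
        return self.speech_head(h), self.music_head(h)

word_logits, genre_logits = BranchedAudioNet()(torch.randn(2, 1, 128, 200))
```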
Estrogenic modulation of auditory processing: a vertebrate comparison
Caras, Melissa L.
2013-01-01
Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849
A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion
Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon
2012-01-01
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyrus, are responsible for auditory agnosia; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322
Takahashi, Kuniyuki; Hishida, Ryuichi; Kubota, Yamato; Kudoh, Masaharu; Takahashi, Sugata; Shibuki, Katsuei
2006-03-01
Functional brain imaging using endogenous fluorescence of mitochondrial flavoprotein is useful for investigating mouse cortical activities via the intact skull, which is thin and sufficiently transparent in mice. We applied this method to investigate auditory cortical plasticity regulated by acoustic environments. Normal mice of the C57BL/6 strain, reared in various acoustic environments for at least 4 weeks after birth, were anaesthetized with urethane (1.7 g/kg, i.p.). Auditory cortical images of endogenous green fluorescence in blue light were recorded by a cooled CCD camera via the intact skull. Cortical responses elicited by tonal stimuli (5, 10 and 20 kHz) exhibited mirror-symmetrical tonotopic maps in the primary auditory cortex (AI) and anterior auditory field (AAF). Depression of auditory cortical responses regarding response duration was observed in sound-deprived mice compared with naïve mice reared in a normal acoustic environment. When mice were exposed to an environmental tonal stimulus at 10 kHz for more than 4 weeks after birth, the cortical responses were potentiated in a frequency-specific manner in respect to peak amplitude of the responses in AI, but not for the size of the responsive areas. Changes in AAF were less clear than those in AI. To determine the modified synapses by acoustic environments, neural responses in cortical slices were investigated with endogenous fluorescence imaging. The vertical thickness of responsive areas after supragranular electrical stimulation was significantly reduced in the slices obtained from sound-deprived mice. These results suggest that acoustic environments regulate the development of vertical intracortical circuits in the mouse auditory cortex.
Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.
Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine
2017-03-22
Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, including the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants that have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language-learning. Results from this study suggest that brain responses to deviant sounds in an oddball paradigm follow a cascade of oscillatory modulations. This cascade begins with a gamma response that later emerges as a beta synchronization, which is temporally coupled with a theta modulation, and followed by a second, subsequent theta modulation. The difference in frequency and timing of the theta modulations appears to reflect a measure of surprise. These insights into the neurophysiological mechanisms of auditory discrimination provide a basis for exploring the clinical utility of the MMR TF and other auditory oddball responses.
Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain
2015-05-01
Psychophysiological evidence supports a music-language association, such that experience in one domain can impact processing required in the other domain. We investigated the bidirectionality of this association by measuring event-related potentials (ERPs) in native English-speaking musicians, native tone language (Cantonese) nonmusicians, and native English-speaking nonmusician controls. We tested the degree to which pitch expertise stemming from musicianship or tone language experience similarly enhances the neural encoding of auditory information necessary for speech and music processing. Early cortical discriminatory processing for music and speech sounds was characterized using the mismatch negativity (MMN). Stimuli included 'large deviant' and 'small deviant' pairs of sounds that differed minimally in pitch (fundamental frequency, F0; contrastive musical tones) or timbre (first formant, F1; contrastive speech vowels). Behavioural F0 and F1 difference limen tasks probed listeners' perceptual acuity for these same acoustic features. Musicians and Cantonese speakers performed comparably in pitch discrimination; only musicians showed an additional advantage on timbre discrimination performance and enhanced MMN responses to both music and speech. Cantonese language experience was not associated with enhancements on neural measures, despite enhanced behavioural pitch acuity. These data suggest that while both musicianship and tone language experience enhance some aspects of auditory acuity (behavioural pitch discrimination), musicianship confers more far-reaching enhancements to auditory function, tuning both pitch- and timbre-related brain processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
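A short sketch of the standard deviant-minus-standard MMN derivation referred to above; the epochs are assumed to start at stimulus onset, and the 100-250 ms measurement window is an assumption rather than the paper's exact window.

```python
import numpy as np

def mmn_difference_wave(deviant_erp, standard_erp, fs, window_s=(0.10, 0.25)):
    """Return the deviant-minus-standard difference wave and its mean amplitude
    in an assumed post-stimulus window (epochs assumed to begin at onset)."""
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    i0, i1 = int(window_s[0] * fs), int(window_s[1] * fs)
    return diff, diff[i0:i1].mean()
```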
Ross, Deborah A.; Puñal, Vanessa M.; Agashe, Shruti; Dweck, Isaac; Mueller, Jerel; Grill, Warren M.; Wilson, Blake S.
2016-01-01
Understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5–80 μA, 100–300 Hz, n = 172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals' judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site compared with the reference frequency used in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site's response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency-tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated, and to provide a greater range of evoked percepts. SIGNIFICANCE STATEMENT Patients with hearing loss stemming from causes that interrupt the auditory pathway after the cochlea need a brain prosthetic to restore hearing. Recently, prosthetic stimulation in the human inferior colliculus (IC) was evaluated in a clinical trial. Thus far, speech understanding was limited for the subjects and this limitation is thought to be partly due to challenges in harnessing the sound frequency representation in the IC. Here, we tested the effects of IC stimulation in monkeys trained to report the sound frequencies they heard. Our results indicate that the IC can be used to introduce a range of frequency percepts and suggest that placement of a greater number of electrode contacts may improve the effectiveness of such implants. PMID:27147659
[Clinical diagnosis of Treacher Collins syndrome and the efficacy of using BAHA].
Wang, Y B; Chen, X W; Wang, P; Fan, X M; Fan, Y; Liu, Q; Gao, Z Q
2017-04-20
Objective: To evaluate the efficacy of softband or implanted BAHA in patients with Treacher Collins syndrome (TCS). Method: Six patients with TCS were studied. The Teber scoring system was used to grade the degree of deformity. Air- and bone-conduction auditory thresholds were assessed by auditory brain stem response (ABR). The infant-toddler meaningful auditory integration scale (IT-MAIS) was used to assess auditory development at three time points: baseline, 3 months and 6 months. Hearing thresholds and speech recognition scores were measured under unaided and aided conditions. Result: The average deformity score was 14.0±0.6. The TCOF1 gene was tested in two patients. The bone-conduction hearing threshold of the patients was (18.0±4.5) dBnHL and the air-conduction hearing threshold was (70.5±7.0) dBnHL. The IT-MAIS total, detection and perception scores improved significantly after wearing the softband BAHA and approached the normal level in the 2 patients under 2 years old. The hearing thresholds of the 6 patients in unaided and softband BAHA conditions were (65.8±3.8) dBHL and (30.0±3.2) dBHL (P<0.01), respectively, and that of 1 implanted BAHA was 15 dBHL. The speech recognition scores of 3 patients in unaided and softband BAHA conditions were (31.7±3.5)% and (86.0±1.7)% (P<0.05), respectively, and that of 1 implanted BAHA was 96%. Conclusion: Once a patient is diagnosed with TCS on the basis of clinical manifestations and genetic testing, the BAHA system can help restore hearing to a normal level. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
2017-01-01
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
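A minimal sketch of channel-pair functional connectivity of the kind described above, using pairwise Pearson correlation as an assumed connectivity metric (the paper's exact pipeline is not specified in the abstract).

```python
import numpy as np

def fnirs_connectivity(hbo):
    """Resting-state functional connectivity between fNIRS channels.
    'hbo' is an (n_channels, n_samples) array of oxy-haemoglobin time series;
    the Pearson correlation used here is an assumption about the metric."""
    return np.corrcoef(hbo)        # (n_channels, n_channels) connectivity matrix
```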
Broderick, Patricia A.; Rosenbaum, Taylor
2013-01-01
Cocaine is a psychostimulant in the pharmacological class of drugs called Local Anesthetics. Interestingly, cocaine is the only drug in this class that has a chemical formula comprised of a tropane ring and is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and that psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral as well as neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called Acoustic Startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus) although such gender responses to acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients as well as cocaine treated animals are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex. Thus, the data elucidate gender-specificity to the startle response in animals. Finally, preliminary studies show the effect of cocaine on acoustic startle in tandem with effects on estrous cycle. The data further suggest that hormones may play a role in these sex differences to acoustic startle reported herein. PMID:24961412
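For reference, the conventional pre-pulse inhibition measure can be written as a one-line calculation; the exact formula is not given in the abstract, so this is an assumed (but common) convention.

```python
def percent_ppi(pulse_alone_amplitude, prepulse_pulse_amplitude):
    """Percent reduction of startle amplitude when the startling pulse is
    preceded by a prepulse; higher values indicate stronger sensorimotor gating."""
    return 100.0 * (1.0 - prepulse_pulse_amplitude / pulse_alone_amplitude)
```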
Decoding the neural signatures of emotions expressed through sound.
Sachs, Matthew E; Habibi, Assal; Damasio, Antonio; Kaplan, Jonas T
2018-07-01
Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. But some of the acoustical properties of sounds that express certain emotions vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions-happiness, sadness, and fear-while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments and vice-versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties. Also, individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli and enables us to understand and connect with others effectively. Copyright © 2018 Elsevier Inc. All rights reserved.
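A hedged sketch of the cross-decoding logic described above (train on patterns from one sound source, test on another); the linear SVM, the feature scaling and the omission of the searchlight loop over voxel spheres are assumptions, not the authors' exact pipeline.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def cross_instrument_accuracy(X_train, y_train, X_test, y_test):
    """Train an emotion classifier (happiness/sadness/fear labels) on voxel
    patterns from one source (e.g. voice trials) and score it on another
    (e.g. violin trials). X_*: (n_trials, n_voxels) arrays."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)       # cross-source classification accuracy
```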
Primary and multisensory cortical activity is correlated with audiovisual percepts.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
2010-04-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as/ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice/ ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.
Primary and Multisensory Cortical Activity is Correlated with Audiovisual Percepts
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P.; Stufflebeam, Steven
2012-01-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. PMID:19780040
Maturation of the auditory t-complex brain response across adolescence.
Mahajan, Yatin; McArthur, Genevieve
2013-02-01
Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested if the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
What Brain Research Suggests for Teaching Reading Strategies
ERIC Educational Resources Information Center
Willis, Judy
2009-01-01
How the brain learns to read has been the subject of much neuroscience educational research. Evidence is mounting for identifiable networks of connected neurons that are particularly active during reading processes such as response to visual and auditory stimuli, relating new information to prior knowledge, long-term memory storage, comprehension,…
Vicario, David S.
2017-01-01
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird’s own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing. NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds. PMID:28031398
Cooperative dynamics in auditory brain response
NASA Astrophysics Data System (ADS)
Kwapień, J.; Drożdż, S.; Liu, L. C.; Ioannides, A. A.
1998-11-01
Simultaneous estimates of activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulations were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analyzed using the concept of mutual information. The analysis constitutes an objective method to address the nature of interhemispheric correlations in response to auditory stimulations. The results provide clear evidence of the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the interhemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
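A minimal sketch of the kind of time-lagged mutual-information analysis described above, applied to two simulated cortical time series. The histogram-based estimator, lag range, and simulated signals are assumptions for illustration only.

```python
# Time-lagged mutual information between two simulated "left" and "right"
# auditory-cortex signals, in the spirit of the interhemispheric analysis above.
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (bits) between two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def lagged_mi(x, y, lag):
    """MI between x(t) and y(t + lag); a positive lag means x leads y."""
    if lag > 0:
        return mutual_information(x[:-lag], y[lag:])
    if lag < 0:
        return mutual_information(x[-lag:], y[:lag])
    return mutual_information(x, y)

rng = np.random.default_rng(1)
fs = 1000                                          # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
right = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
left = np.roll(right, 15) + 0.5 * rng.normal(size=t.size)   # "left" lags "right" by 15 ms

lags = np.arange(-40, 41, 5)                       # lags in samples (= ms at 1 kHz)
curve = [lagged_mi(right, left, int(lag)) for lag in lags]
best = int(lags[int(np.argmax(curve))])
print(f"MI peaks at lag {best} ms, i.e. the 'right' signal leads 'left' by {best} ms")
```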
Encoding of frequency-modulation (FM) rates in human auditory cortex.
Okamoto, Hidehiko; Kakigi, Ryusuke
2015-12-14
Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain, but differed in frequency modulation rates (1, 4, 16, and 64 octaves per sec). The results demonstrated that higher-rate frequency-modulated sweeps elicited smaller N1m responses and larger sustained field responses. Frequency modulation rate thus had a significant impact on human brain responses, providing a potential key to how the brain disentangles sequences of natural frequency-modulated sounds such as speech and music.
Extrinsic Embryonic Sensory Stimulation Alters Multimodal Behavior and Cellular Activation
Markham, Rebecca G.; Shimizu, Toru; Lickliter, Robert
2009-01-01
Embryonic vision is generated and maintained by spontaneous neuronal activation patterns, yet extrinsic stimulation also sculpts sensory development. Because the sensory and motor systems are interconnected in embryogenesis, how extrinsic sensory activation guides multimodal differentiation is an important topic. Further, it is unknown whether extrinsic stimulation experienced near sensory sensitivity onset contributes to persistent brain changes, ultimately affecting postnatal behavior. To determine the effects of extrinsic stimulation on multimodal development, we delivered auditory stimulation to bobwhite quail groups during early, middle, or late embryogenesis, and then tested postnatal behavioral responsiveness to auditory or visual cues. Auditory preference tendencies were more consistently toward the conspecific stimulus for animals stimulated during late embryogenesis. Groups stimulated during middle or late embryogenesis showed altered postnatal species-typical visual responsiveness, demonstrating a persistent multimodal effect. We also examined whether auditory-related brain regions are receptive to extrinsic input during middle embryogenesis by measuring postnatal cellular activation. Stimulated birds showed a greater number of ZENK-immunopositive cells per unit volume of brain tissue in deep optic tectum, a midbrain region strongly implicated in multimodal function. We observed similar results in the medial and caudomedial nidopallia in the telencephalon. There were no ZENK differences between groups in inferior colliculus or in caudolateral nidopallium, avian analog to prefrontal cortex. To our knowledge, these are the first results linking extrinsic stimulation delivered so early in embryogenesis to changes in postnatal multimodal behavior and cellular activation. The potential role of competitive interactions between the sensory and motor systems is discussed. PMID:18777564
Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A
2011-01-01
Six patients in remission after a first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined with fMRI and ERP methods. The auditory oddball paradigm was applied. Differences in P300 parameters did not reach the level of significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, which points to dysfunction of these brain areas in supporting auditory selective attention.
Price, D; Tyler, L K; Neto Henriques, R; Campbell, K L; Williams, N; Treder, M S; Taylor, J R; Henson, R N A
2017-06-09
Slowing is a common feature of ageing, yet a direct relationship between neural slowing and brain atrophy has yet to be established in healthy humans. We combine magnetoencephalographic (MEG) measures of neural processing speed with magnetic resonance imaging (MRI) measures of white and grey matter in a large population-derived cohort to investigate the relationship between age-related structural differences and visual evoked field (VEF) and auditory evoked field (AEF) delay across two different tasks. Here we use a novel technique to show that VEFs exhibit a constant delay, whereas AEFs exhibit delay that accumulates over time. White-matter (WM) microstructure in the optic radiation partially mediates visual delay, suggesting increased transmission time, whereas grey matter (GM) in auditory cortex partially mediates auditory delay, suggesting less efficient local processing. Our results demonstrate that age has dissociable effects on neural processing speed, and that these effects relate to different types of brain atrophy.
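The mediation claims above (white-matter microstructure partially mediating visual delay, grey matter partially mediating auditory delay) rest on a statistical mediation analysis. The sketch below illustrates one simple product-of-coefficients (Sobel) approach on simulated data; the original study's mediation procedure, variables, and units may differ.

```python
# Illustrative sketch of a simple mediation analysis (age -> white-matter
# microstructure -> visual evoked-field delay) using the product-of-coefficients
# approach with a Sobel test. Simulated data and this particular estimator are
# assumptions; the study's actual mediation method may differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
age = rng.uniform(18, 88, n)
wm = -0.02 * age + rng.normal(scale=0.5, size=n)               # mediator: WM microstructure (arbitrary units)
delay = 0.3 * age - 4.0 * wm + rng.normal(scale=8.0, size=n)   # outcome: VEF delay (ms)

# Path a: age -> mediator; Path b: mediator -> outcome, controlling for age.
fit_a = sm.OLS(wm, sm.add_constant(age)).fit()
fit_b = sm.OLS(delay, sm.add_constant(np.column_stack([age, wm]))).fit()

a, se_a = fit_a.params[1], fit_a.bse[1]
b, se_b = fit_b.params[2], fit_b.bse[2]

indirect = a * b                                   # mediated (indirect) effect
sobel_se = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect = {indirect:.3f}, Sobel z = {indirect / sobel_se:.2f}")
```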
Price, D.; Tyler, L. K.; Neto Henriques, R.; Campbell, K. L.; Williams, N.; Treder, M.S.; Taylor, J. R.; Brayne, Carol; Bullmore, Edward T.; Calder, Andrew C.; Cusack, Rhodri; Dalgleish, Tim; Duncan, John; Matthews, Fiona E.; Marslen-Wilson, William D.; Rowe, James B.; Shafto, Meredith A.; Cheung, Teresa; Davis, Simon; Geerligs, Linda; Kievit, Rogier; McCarrey, Anna; Mustafa, Abdur; Samu, David; Tsvetanov, Kamen A.; van Belle, Janna; Bates, Lauren; Emery, Tina; Erzinglioglu, Sharon; Gadie, Andrew; Gerbase, Sofia; Georgieva, Stanimira; Hanley, Claire; Parkin, Beth; Troy, David; Auer, Tibor; Correia, Marta; Gao, Lu; Green, Emma; Allen, Jodie; Amery, Gillian; Amunts, Liana; Barcroft, Anne; Castle, Amanda; Dias, Cheryl; Dowrick, Jonathan; Fair, Melissa; Fisher, Hayley; Goulding, Anna; Grewal, Adarsh; Hale, Geoff; Hilton, Andrew; Johnson, Frances; Johnston, Patricia; Kavanagh-Williamson, Thea; Kwasniewska, Magdalena; McMinn, Alison; Norman, Kim; Penrose, Jessica; Roby, Fiona; Rowland, Diane; Sargeant, John; Squire, Maggie; Stevens, Beth; Stoddart, Aldabra; Stone, Cheryl; Thompson, Tracy; Yazlik, Ozlem; Barnes, Dan; Dixon, Marie; Hillman, Jaya; Mitchell, Joanne; Villis, Laura; Henson, R. N. A.
2017-01-01
Slowing is a common feature of ageing, yet a direct relationship between neural slowing and brain atrophy has yet to be established in healthy humans. We combine magnetoencephalographic (MEG) measures of neural processing speed with magnetic resonance imaging (MRI) measures of white and grey matter in a large population-derived cohort to investigate the relationship between age-related structural differences and visual evoked field (VEF) and auditory evoked field (AEF) delay across two different tasks. Here we use a novel technique to show that VEFs exhibit a constant delay, whereas AEFs exhibit delay that accumulates over time. White-matter (WM) microstructure in the optic radiation partially mediates visual delay, suggesting increased transmission time, whereas grey matter (GM) in auditory cortex partially mediates auditory delay, suggesting less efficient local processing. Our results demonstrate that age has dissociable effects on neural processing speed, and that these effects relate to different types of brain atrophy. PMID:28598417
Neurophysiological Studies of Auditory Verbal Hallucinations
Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko
2012-01-01
We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe “state” (or symptom capture) studies where periods with and without hallucinations are compared “within” a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss “trait” studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidites, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
2016-01-01
Abstract Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human experience includes many different types of referents, some visual, some auditory, some very abstract. Does the neural basis of reference resolution depend on the nature of the referents, or do our brains use a modality-general mechanism for linking meanings to referents? Here we report evidence for both. Using magnetoencephalography (MEG), we varied both the modality of referents, which consisted either of visual or auditory objects, and the point at which reference resolution was possible within sentences. Source-localized MEG responses revealed brain activity associated with reference resolution that was independent of the modality of the referents, localized to the medial parietal lobe and starting ∼415 ms after the onset of reference resolving words. A modality-specific response to reference resolution in auditory domains was also found, in the vicinity of auditory cortex. Our results suggest that referential language processing cannot be reduced to processing in classical language regions and representations of the referential domain in modality-specific neural systems. Instead, our results suggest that reference resolution engages medial parietal cortex, which supports a mechanism for referential processing regardless of the content modality. PMID:28058272
ERIC Educational Resources Information Center
Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee
2012-01-01
Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…
Tinnitus and hyperacusis: Contributions of paraflocculus, reticular formation and stress.
Chen, Yu-Chen; Chen, Guang-Di; Auerbach, Benjamin D; Manohar, Senthilvelan; Radziwon, Kelly; Salvi, Richard
2017-06-01
Tinnitus and hyperacusis are common and potentially serious hearing disorders associated with noise-, age- or drug-induced hearing loss. Accumulating evidence suggests that tinnitus and hyperacusis are linked to excessive neural activity in a distributed brain network that not only includes the central auditory pathway, but also brain regions involved in arousal, emotion, stress and motor control. Here we examine electrophysiological changes in two novel non-auditory areas implicated in tinnitus and hyperacusis: the caudal pontine reticular nucleus (PnC), involved in arousal, and the paraflocculus lobe of the cerebellum (PFL), implicated in head-eye coordination and tinnitus gating; we also measure changes in corticosterone stress hormone levels. Using the salicylate-induced model of tinnitus and hyperacusis, we found that long-latency (>10 ms) sound-evoked response components in both brain regions were significantly enhanced after salicylate administration, while the short-latency responses were reduced, likely reflecting cochlear hearing loss. These results are consistent with the central gain model of tinnitus and hyperacusis, which proposes that these disorders arise from the amplification of neural activity in the central auditory pathway plus other regions linked to arousal, emotion, tinnitus gating and motor control. Finally, we demonstrate that salicylate increases corticosterone levels in a dose-dependent manner, consistent with the notion that stress may interact with hearing loss in tinnitus and hyperacusis development. This increased stress response has the potential to have wide-ranging effects on the central nervous system and may therefore contribute to brain-wide changes in neural activity. Copyright © 2017 Elsevier B.V. All rights reserved.
Distinct Neural Stem Cell Populations Give Rise to Disparate Brain Tumors in Response to N-MYC
Swartling, Fredrik J.; Savov, Vasil; Persson, Anders I.; Chen, Justin; Hackett, Christopher S.; Northcott, Paul A.; Grimmer, Matthew R.; Lau, Jasmine; Chesler, Louis; Perry, Arie; Phillips, Joanna J.; Taylor, Michael D.; Weiss, William A.
2012-01-01
The proto-oncogene MYCN is mis-expressed in various types of human brain tumors. To clarify how developmental and regional differences influence transformation, we transduced wild-type or mutationally stabilized murine N-mycT58A into neural stem cells (NSCs) from perinatal murine cerebellum, brain stem and forebrain. Transplantation of N-mycWT NSCs was insufficient for tumor formation. N-mycT58A cerebellar and brain stem NSCs generated medulloblastoma/primitive neuroectodermal tumors, whereas forebrain NSCs developed diffuse glioma. Expression analyses distinguished tumors generated from these different regions, with tumors from embryonic versus postnatal cerebellar NSCs demonstrating SHH-dependence and SHH-independence, respectively. These differences were regulated in part by the transcription factor SOX9, activated in the SHH subclass of human medulloblastoma. Our results demonstrate context-dependent transformation of NSCs in response to a common oncogenic signal. PMID:22624711
The Staggered Spondaic Word Test. A ten-minute look at the central nervous system through the ears.
Katz, J; Smith, P S
1991-01-01
We have described three major groupings that encompass most auditory processing difficulties. While the problems may be superimposed upon one another in any individual client, each diagnostic sign is closely associated with particular communication and learning disorders. In addition, these behaviors may be related back to the functional anatomy of the regions that are implicated by the SSW test. The auditory-decoding group is deficient in rapid analysis of speech. The vagueness of speech sound knowledge is thought to lead to auditory misunderstanding and confusion. In early life, this may be reflected in the child's articulation. Poor phonic skills that result from this deficit are thought to contribute to their limited reading and spelling abilities. The auditory tolerance-fading memory group is often thought to have severe auditory-processing problems because those in it are highly distracted by background sounds and have poor auditory memories. However, school performance is not far from grade level, and the resulting reading disabilities stem more from limited comprehension than from an inability to sound out the words. Distractibility and poor auditory memory could contribute to the apparent weakness in reading comprehension. Many of the characteristics of the auditory tolerance-fading memory group are similar to those of attention deficit disorder cases. Both groups are associated anatomically with the AC region. The auditory integration cases can be divided into two subgroups. In the first, the subjects exhibit the most severe reading and spelling problems of the three major categories. These individuals closely resemble the classical dyslexics. We presume that this disorder represents a major disruption in auditory-visual integration. The second subgroup has much less severe learning difficulties, which closely follow the pattern of dysfunction of the auditory tolerance-fading memory group. The excellent physiological procedures to which we have been exposed during this Windows on the Brain conference provide a glimpse of the exciting possibilities for studying brain function. However, in working with individuals who have cognitive impairments, the new technology should be validated by standard behavioral tests. In turn, the new techniques will provide those who use behavioral measures with new parameters and concepts to broaden our understanding. For the past quarter of a century, the SSW test has been compared with other behavioral, physiological, and anatomical procedures. Based on the information that has been assembled, we have been able to classify auditory processing disorders into three major categories.(ABSTRACT TRUNCATED AT 400 WORDS)
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.
2017-01-01
Abstract Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor’s song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM’s established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural “switch point” from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds. PMID:29255797
Long-term exposure to noise impairs cortical sound processing and attention control.
Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto
2004-11-01
Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.
Brain function assessment in different conscious states.
Ozgoren, Murat; Bayazit, Onur; Kocaaslan, Sibel; Gokmen, Necati; Oniz, Adile
2010-06-03
The study of brain functioning is a major challenge in neuroscience, as the human brain performs dynamic and ever-changing information processing. The problem is compounded in conditions where the brain undergoes major changes, the so-called different conscious states. Although consciousness is hard to define precisely, there are certain conditions whose descriptions have reached a consensus. Sleep and anesthesia are distinct conditions, separable from each other and from wakefulness. Our group has aimed to tackle brain functioning by setting up similar research conditions for these three conscious states. To achieve this goal, we designed an auditory stimulation battery with changing conditions, recorded during a 40-channel EEG polygraph (Nuamps) session. The stimuli (modified mismatch, auditory evoked, etc.) were administered both in the operating room and in the sleep laboratory via an Embedded Interactive Stimulus Unit developed in our laboratory. The overall study provided results for three domains of consciousness. To monitor the changes, we incorporated Bispectral Index (BIS) monitoring into both the sleep and anesthesia conditions. First-stage results provided a basic understanding of these altered states, showing that auditory stimuli were processed in both light and deep sleep stages. Anesthesia produces a sudden change in brain responsiveness; therefore, dose-dependent anesthetic administration proved useful. Auditory processing was examined with a focus on the N1 wave, with analyses ranging from spectrograms to sLORETA. The frequency components were observed to shift throughout the stages. Propofol administration and deeper sleep stages both reduced the N1 component. sLORETA revealed similar activity at BA7 in sleep (BIS 70) and at a target propofol concentration of 1.2 microg/mL. The current study used similar stimulation and recording systems and incorporated BIS-dependent values to validate a common approach to sleep and anesthesia. Accordingly, the brain shows a complex behavioral pattern, dynamically changing its responsiveness in accordance with stimulation and state.
Auditory-evoked cortical activity: contribution of brain noise, phase locking, and spectral power
Harris, Kelly C.; Vaden, Kenneth I.; Dubno, Judy R.
2017-01-01
Background The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response only reflect a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contribution of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude and how these variables may change across age groups. This information is important for appropriate interpretation of event-related potential (ERP) results and for understanding age-related neural pathologies. Methods EEG was measured from 25 younger and 25 older normal-hearing adults. Age-related and individual differences in P2 response amplitudes, and variability in brain noise, phase-locking value (PLV), and spectral power (4–8 Hz), were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. Results Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. Regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. Conclusions ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support the view that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal-hearing adults. Based on our results, we discuss potential models in which differences in neural synchrony and brain noise can account for associations with P2 amplitudes and behavior and potentially provide a better explanation of the neural mechanisms that underlie declines in auditory processing and training benefits. PMID:25046314
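The single-trial measures named above (inter-trial phase-locking value and 4-8 Hz spectral power) can be computed roughly as in the following sketch, which simulates single trials from one frontocentral channel. The filter settings, analysis windows, and simulated data are assumptions for illustration; the subject-level regression in the study would use such measures computed per participant.

```python
# Sketch of the single-trial logic: compute inter-trial phase locking (PLV) and
# 4-8 Hz power at one channel, plus the P2 amplitude of the averaged ERP.
# Simulated trials and the exact filtering choices are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(3)
fs, n_trials = 250, 200
t = np.arange(-0.1, 0.5, 1 / fs)                       # epoch from -100 to 500 ms

# Simulate trials: a theta-band evoked response with trial-to-trial latency jitter plus noise.
jitter = rng.normal(scale=0.02, size=n_trials)
trials = np.array([np.sin(2 * np.pi * 6 * (t - j)) * np.exp(-((t - 0.18 - j) ** 2) / 0.01)
                   + rng.normal(scale=0.8, size=t.size) for j in jitter])

# Band-pass 4-8 Hz and take the analytic signal of each trial.
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)

plv = np.abs(np.mean(np.exp(1j * np.angle(analytic)), axis=0))   # PLV across trials, per sample
power = np.mean(np.abs(analytic) ** 2, axis=0)                   # mean 4-8 Hz power, per sample

# P2 amplitude from the averaged ERP in a 150-250 ms window.
win = (t >= 0.15) & (t <= 0.25)
p2_amplitude = trials.mean(axis=0)[win].max()
print(f"peak PLV: {plv.max():.2f}, P2 amplitude: {p2_amplitude:.2f}")
```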
Modulation of auditory processing during speech movement planning is limited in adults who stutter
Daliri, Ayoub; Max, Ludo
2015-01-01
Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults’ auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060
Effects of musical training on the auditory cortex in children.
Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E
2003-11-01
Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.
Li, Faith C H; Yen, J C; Chan, Samuel H H; Chang, Alice Y W
2012-02-07
Intoxication from the psychostimulant methamphetamine (METH) because of cardiovascular collapse is a common cause of death within the abuse population. For obvious reasons, the heart has been taken as the primary target for this METH-induced toxicity. The demonstration that failure of brain stem cardiovascular regulation, rather than the heart, holds the key to cardiovascular collapse induced by the pesticide mevinphos implicates another potential underlying mechanism. The present study evaluated the hypothesis that METH effects acute cardiovascular depression by dampening the functional integrity of baroreflex via an action on brain stem nuclei that are associated with this homeostatic mechanism. The distribution of METH in brain and heart on intravenous administration in male Sprague-Dawley rats, and the resultant changes in arterial pressure (AP), heart rate (HR) and indices for baroreflex-mediated sympathetic vasomotor tone and cardiac responses were evaluated, alongside survival rate and time. Intravenous administration of METH (12 or 24 mg/kg) resulted in a time-dependent and dose-dependent distribution of the psychostimulant in brain and heart. The distribution of METH to neural substrates associated with brain stem cardiovascular regulation was significantly larger than brain targets for its neurological and psychological effects; the concentration of METH in cardiac tissues was the lowest among all tissues studied. In animals that succumbed to METH, the baroreflex-mediated sympathetic vasomotor tone and cardiac response were defunct, concomitant with cessation of AP and HR. On the other hand, although depressed, those two indices in animals that survived were maintained, alongside sustainable AP and HR. Linear regression analysis further revealed that the degree of dampening of brain stem cardiovascular regulation was positively and significantly correlated with the concentration of METH in key neural substrate involved in this homeostatic mechanism. We conclude that on intravenous administration, METH exhibits a preferential distribution to brain stem nuclei that are associated with cardiovascular regulation. We further found that the concentration of METH in those brain stem sites dictates the extent that baroreflex-mediated sympathetic vasomotor tone and cardiac responses are compromised, which in turn determines survival or fatality because of cardiovascular collapse.
2012-01-01
Background Intoxication from the psychostimulant methamphetamine (METH) because of cardiovascular collapse is a common cause of death within the abuse population. For obvious reasons, the heart has been taken as the primary target for this METH-induced toxicity. The demonstration that failure of brain stem cardiovascular regulation, rather than the heart, holds the key to cardiovascular collapse induced by the pesticide mevinphos implicates another potential underlying mechanism. The present study evaluated the hypothesis that METH effects acute cardiovascular depression by dampening the functional integrity of baroreflex via an action on brain stem nuclei that are associated with this homeostatic mechanism. Methods The distribution of METH in brain and heart on intravenous administration in male Sprague-Dawley rats, and the resultant changes in arterial pressure (AP), heart rate (HR) and indices for baroreflex-mediated sympathetic vasomotor tone and cardiac responses were evaluated, alongside survival rate and time. Results Intravenous administration of METH (12 or 24 mg/kg) resulted in a time-dependent and dose-dependent distribution of the psychostimulant in brain and heart. The distribution of METH to neural substrates associated with brain stem cardiovascular regulation was significantly larger than brain targets for its neurological and psychological effects; the concentration of METH in cardiac tissues was the lowest among all tissues studied. In animals that succumbed to METH, the baroreflex-mediated sympathetic vasomotor tone and cardiac response were defunct, concomitant with cessation of AP and HR. On the other hand, although depressed, those two indices in animals that survived were maintained, alongside sustainable AP and HR. Linear regression analysis further revealed that the degree of dampening of brain stem cardiovascular regulation was positively and significantly correlated with the concentration of METH in key neural substrate involved in this homeostatic mechanism. Conclusions We conclude that on intravenous administration, METH exhibits a preferential distribution to brain stem nuclei that are associated with cardiovascular regulation. We further found that the concentration of METH in those brain stem sites dictates the extent that baroreflex-mediated sympathetic vasomotor tone and cardiac responses are compromised, which in turn determines survival or fatality because of cardiovascular collapse. PMID:22313577
Isolated brain stem lesion in children: is it acute disseminated encephalomyelitis or not?
Alper, G; Sreedher, G; Zuccoli, G
2013-01-01
Isolated brain stem lesions presenting with acute neurologic findings create a major diagnostic dilemma in children. Although the brain stem is frequently involved in ADEM, solitary brain stem lesions are unusual. We performed a retrospective review in 6 children who presented with an inflammatory lesion confined to the brain stem. Two children were diagnosed with connective tissue disorder, CNS lupus, and localized scleroderma. The etiology could not be determined in 1, and clinical features suggested monophasic demyelination in 3. In these 3 children, initial lesions demonstrated vasogenic edema; all showed dramatic response to high-dose corticosteroids and made a full clinical recovery. Follow-up MRI showed complete resolution of lesions, and none had relapses at >2 years of follow-up. In retrospect, these cases are best regarded as a localized form of ADEM. We conclude that though ADEM is typically a disseminated disease with multifocal lesions, it rarely presents with monofocal demyelination confined to the brain stem.
NASA Astrophysics Data System (ADS)
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
2015-02-01
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (EEG) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect impaired communication among neural areas, which may be related to abnormal cognitive functions.
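As an illustration of the baseline-versus-response coupling comparison described above, the sketch below computes theta-band magnitude-squared coherence between two simulated channels in a "baseline" and a "response" window. The window lengths, spectral parameters, and simulated signals are assumptions; the study also used phase-locking value and Euclidean distance, which are not reproduced here.

```python
# Theta-band coherence between two channels, compared between a baseline and a
# response window. Simulated data and parameters are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs = 500
t = np.arange(0, 1.0, 1 / fs)                  # one 1-s analysis window per condition

def epoch(theta_coupling):
    """Two channels sharing a 6-Hz component with the given coupling strength."""
    common = np.sin(2 * np.pi * 6 * t)
    ch1 = theta_coupling * common + rng.normal(scale=1.0, size=t.size)
    ch2 = theta_coupling * common + rng.normal(scale=1.0, size=t.size)
    return ch1, ch2

def theta_coherence(x, y):
    f, cxy = coherence(x, y, fs=fs, nperseg=128)
    band = (f >= 4) & (f <= 8)
    return float(cxy[band].mean())

base_ch1, base_ch2 = epoch(theta_coupling=0.2)   # weak shared theta drive at baseline
resp_ch1, resp_ch2 = epoch(theta_coupling=1.0)   # stronger shared theta drive in the response

print(f"theta coherence, baseline: {theta_coherence(base_ch1, base_ch2):.2f}")
print(f"theta coherence, response: {theta_coherence(resp_ch1, resp_ch2):.2f}")
```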
Sitek, Kevin R.; Cai, Shanqing; Beal, Deryk S.; Perkell, Joseph S.; Guenther, Frank H.; Ghosh, Satrajit S.
2016-01-01
Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex (OFC). Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and OFC may underlie successful compensatory mechanisms by more fluent stutterers. PMID:27199712
ERIC Educational Resources Information Center
Poulsen, Catherine; Picton, Terence W.; Paus, Tomas
2009-01-01
Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…
Vanneste, Sven; De Ridder, Dirk
2012-01-01
Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375
Understanding The Neural Mechanisms Involved In Sensory Control Of Voice Production
Parkinson, Amy L.; Flagmeier, Sabina G.; Manes, Jordan L.; Larson, Charles R.; Rogers, Bill; Robin, Donald A.
2012-01-01
Auditory feedback is important for the control of voice fundamental frequency (F0). In the present study we used neuroimaging to identify regions of the brain responsible for sensory control of the voice. We used a pitch-shift paradigm where subjects respond to an alteration, or shift, of voice pitch auditory feedback with a reflexive change in F0. To determine the neural substrates involved in these audio-vocal responses, subjects underwent fMRI scanning while vocalizing with or without pitch-shifted feedback. The comparison of shifted and unshifted vocalization revealed activation bilaterally in the superior temporal gyrus (STG) in response to the pitch shifted feedback. We hypothesize that the STG activity is related to error detection by auditory error cells located in the superior temporal cortex and efference copy mechanisms whereby this region is responsible for the coding of a mismatch between actual and predicted voice F0. PMID:22406500
Neuronal effects of nicotine during auditory selective attention.
Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R
2015-06-01
Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized to observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U and inverted U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
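The following is a deliberately simplified stand-in for the state-space attention decoder described above: a random-walk latent attention variable tracked with a scalar Kalman filter from noisy per-window evidence, rather than the authors' EM-based MAP estimator. All parameters and the evidence model are assumptions, intended only to show how a state-space model smooths the attentional estimate over windows of a few seconds.

```python
# Simplified state-space attention tracking: a latent attention variable follows
# a random walk and is observed through noisy per-window evidence (e.g., how
# strongly the neural response tracks speaker 1 vs. speaker 2). A basic Kalman
# filter stands in for the paper's EM-based MAP decoder; parameters are assumed.
import numpy as np

rng = np.random.default_rng(5)
n_windows = 120                                  # e.g., 1-s decoding windows
true_state = np.concatenate([np.ones(60), -np.ones(60)])        # attend speaker 1, then speaker 2
evidence = true_state + rng.normal(scale=1.5, size=n_windows)   # noisy window-wise evidence

q, r = 0.05, 1.5 ** 2                # process and observation noise variances (assumed)
x, p = 0.0, 1.0                      # state estimate and its variance
estimates = []
for z in evidence:
    p += q                           # predict: random-walk state, variance grows
    k = p / (p + r)                  # Kalman gain
    x += k * (z - x)                 # update with this window's evidence
    p *= (1 - k)
    estimates.append(x)

decoded = np.sign(estimates)         # > 0: attending speaker 1; < 0: speaker 2
accuracy = np.mean(decoded == true_state)
print(f"window-level decoding accuracy: {accuracy:.2f}")
```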
Ouyang, Jessica; Pace, Edward; Lepczyk, Laura; Kaufman, Michael; Zhang, Jessica; Perrine, Shane A; Zhang, Jinsheng
2017-07-07
Blast-induced tinnitus is the number one service-connected disability that currently affects military personnel and veterans. To elucidate its underlying mechanisms, we subjected 13 Sprague Dawley adult rats to unilateral 14 psi blast exposure to induce tinnitus and measured auditory and limbic brain activity using manganese-enhanced MRI (MEMRI). Tinnitus was evaluated with a gap detection acoustic startle reflex paradigm, while hearing status was assessed with prepulse inhibition (PPI) and auditory brainstem responses (ABRs). Both anxiety and cognitive functioning were assessed using elevated plus maze and Morris water maze, respectively. Five weeks after blast exposure, 8 of the 13 blasted rats exhibited chronic tinnitus. While acoustic PPI remained intact and ABR thresholds recovered, the ABR wave P1-N1 amplitude reduction persisted in all blast-exposed rats. No differences in spatial cognition were observed, but blasted rats as a whole exhibited increased anxiety. MEMRI data revealed a bilateral increase in activity along the auditory pathway and in certain limbic regions of rats with tinnitus compared to age-matched controls. Taken together, our data suggest that while blast-induced tinnitus may play a role in auditory and limbic hyperactivity, the non-auditory effects of blast and potential traumatic brain injury may also exert an effect.
Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.
Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne
2015-09-21
Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.
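The connectivity inference above was made with dynamic causal modeling and Bayesian model selection. As a toy illustration of the underlying model-comparison logic only, the sketch below scores two simple regression models of an occipital response with an approximate log evidence (-BIC/2) and forms a Bayes factor; the regressors and data are simulated assumptions and this is not DCM.

```python
# Toy Bayesian model comparison: which simulated driver (a "cortico-cortical"
# auditory-cortex signal vs. a "subcortical" signal) better explains an
# occipital response? Approximate log evidence via -BIC/2; all data simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
auditory_cortex = rng.normal(size=n)                          # "direct cortico-cortical" driver
subcortical = 0.3 * auditory_cortex + rng.normal(size=n)      # correlated "subcortical" driver
occipital = 0.8 * auditory_cortex + rng.normal(scale=1.0, size=n)

def approx_log_evidence(y, X):
    """Approximate log model evidence as -BIC/2 of an OLS fit."""
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    return -fit.bic / 2.0

log_ev_cortical = approx_log_evidence(occipital, auditory_cortex)
log_ev_subcortical = approx_log_evidence(occipital, subcortical)
print(f"log Bayes factor (cortical vs subcortical): {log_ev_cortical - log_ev_subcortical:.1f}")
```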
Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas
2014-04-01
Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Activation of neurons in cardiovascular areas of cat brain stem affects spinal reflexes.
Wu, W C; Wang, S D; Liu, J C; Horng, H T; Wayner, M J; Ma, J C; Chai, C Y
1994-01-01
In 65 cats anesthetized with chloralose (40 mg/kg) and urethane (400 mg/kg), the effects of electrical stimulation and microinjection of sodium glutamate (0.25 M, 100-200 nl) in the pressor areas in the rostral brain stem on the evoked L5 ventral root response (EVRR) due to intermittent stimulation of sciatic afferents were compared to stimulating the dorsomedial (DM) and ventrolateral (VLM) medulla. In general, stimulating these rostral brain stem pressor areas including the diencephalon (DIC) and rostral pons (RP) produced increases in systemic arterial pressure (SAP). In most of the cases (85%) there were associated changes in the EVRR, predominantly a decrease in EVRR (72%). Stimulation of the midbrain (MB, principally in the periaqueductal grey) produced decreases in SAP and EVRR. Decreases in EVRR were observed in 91% of the DM and VLM stimulations in which an increase in SAP was produced. This EVRR inhibition was essentially unaltered after acute midcollicular decerebration. Increases in EVRR were also observed and occurred more often in the rostral brain stem than in the medulla. Since changes of both EVRR and SAP could be reproduced by microinjection of Glu into the cardiovascular-reactive areas of the brain stem, this suggests that neuronal perikarya in these areas are responsible for both actions. On some occasions, Glu induced changes in EVRR but not in SAP. This effect occurred more frequently in the rostral brain stem than in the medulla. The present data suggest that separate neuron populations exist in the brain stem for the integration of SAP and spinal reflexes. (ABSTRACT TRUNCATED AT 250 WORDS)
A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie
Hanke, Michael; Baumgartner, Florian J.; Ibe, Pierre; Kaule, Falko R.; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg
2014-01-01
Here we present a high-resolution functional magnetic resonance (fMRI) dataset – 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film (“Forrest Gump”). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures – from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized. PMID:25977761
Task relevance modulates the behavioural and neural effects of sensory predictions
Friston, Karl J.; Nobre, Anna C.
2017-01-01
The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225
A bilateral cortical network responds to pitch perturbations in speech feedback
Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.
2014-01-01
Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223
Chung, Wei-Lun; Bidelman, Gavin M
2016-01-01
We examined cross-language differences in neural encoding and tracking of intensity and pitch cues signaling English stress patterns. Auditory mismatch negativities (MMNs) were recorded in English and Mandarin listeners in response to contrastive English pseudowords whose primary stress occurred either on the first or second syllable (i.e., "nocTICity" vs. "NOCticity"). The contrastive syllable stress elicited two consecutive MMNs in both language groups, but English speakers demonstrated larger responses to stress patterns than Mandarin speakers. Correlations between the amplitude of ERPs and continuous changes in the running intensity and pitch of speech assessed how well each language group's brain activity tracked these salient acoustic features of lexical stress. We found that English speakers' neural responses tracked intensity changes in speech more closely than Mandarin speakers (higher brain-acoustic correlation). Findings demonstrate more robust and precise processing of English stress (intensity) patterns in early auditory cortical responses of native relative to nonnative speakers. Copyright © 2016 Elsevier Inc. All rights reserved.
Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M
2013-05-01
Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.
Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.
ERIC Educational Resources Information Center
Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.
1999-01-01
A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)
The amusic brain: in tune, out of key, and unaware.
Peretz, Isabelle; Brattico, Elvira; Järvenpää, Miika; Tervaniemi, Mari
2009-05-01
Like language, music engagement is universal, complex and present early in life. However, approximately 4% of the general population experiences a lifelong deficit in music perception that cannot be explained by hearing loss, brain damage, intellectual deficiencies or lack of exposure. This musical disorder, commonly known as tone-deafness and now termed congenital amusia, affects mostly the melodic pitch dimension. Congenital amusia is hereditary and is associated with abnormal grey and white matter in the auditory cortex and the inferior frontal cortex. In order to relate these anatomical anomalies to the behavioural expression of the disorder, we measured the electrical brain activity of amusic subjects and matched controls while they monitored melodies for the presence of pitch anomalies. Contrary to current reports, we show that the amusic brain can track quarter-tone pitch differences, exhibiting an early right-lateralized negative brain response. This suggests near-normal neural processing of musical pitch incongruities in congenital amusia. It is important because it reveals that the amusic brain is equipped with the essential neural circuitry to perceive fine-grained pitch differences. What distinguishes the amusic from the normal brain is the limited awareness of this ability and the lack of responsiveness to the semitone changes that violate musical keys. These findings suggest that, in the amusic brain, the neural pitch representation cannot make contact with musical pitch knowledge along the auditory-frontal neural pathway.
Music training relates to the development of neural mechanisms of selective auditory attention.
Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina
2015-04-01
Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
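As a minimal illustration of the variability measure discussed above (one plausible operationalization, not the authors' exact computation), the sketch below quantifies trial-to-trial variability as the mean across-trial standard deviation of simulated evoked responses under an "attend" and an "ignore" condition.

```python
# Minimal sketch, assuming variability = mean across-trial SD of single-trial
# waveforms; the attend condition is simulated with lower single-trial noise.
import numpy as np

def evoked_variability(trials):
    """trials: array (n_trials, n_samples); mean across-trial standard deviation."""
    return float(trials.std(axis=0, ddof=1).mean())

rng = np.random.default_rng(1)
erp = np.sin(np.linspace(0, 2 * np.pi, 200))                   # idealized evoked waveform
attend = erp + 0.5 * rng.standard_normal((60, 200))
ignore = erp + 1.0 * rng.standard_normal((60, 200))
print(evoked_variability(attend), evoked_variability(ignore))  # attend < ignore
```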
Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, Rainer; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia
2014-01-01
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex. PMID:24391486
Tan, Ao; Hu, Li; Tu, Yiheng; Chen, Rui; Hung, Yeung Sam; Zhang, Zhiguo
2016-07-01
The N1 component of auditory evoked potentials is extensively used to investigate the propagation and processing of auditory inputs. However, the substantial interindividual variability of N1 could be a possible confounding factor when comparing different individuals or groups. Therefore, identifying the neuronal mechanism and origin of the interindividual variability of N1 is crucial in basic research and clinical applications. This study aimed to use simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data to investigate the coupling between N1 and spontaneous functional connectivity (FC). EEG and fMRI data were simultaneously collected from a group of healthy individuals during a pure-tone listening task. Spontaneous FC was estimated from spontaneous blood oxygenation level-dependent (BOLD) signals that were isolated by regressing out task-evoked BOLD signals from raw BOLD signals, and was then correlated with N1 magnitude across individuals. It was observed that spontaneous FC between bilateral Heschl's gyrus was significantly and positively correlated with N1 magnitude across individuals (Spearman's R = 0.829, p < 0.001). The specificity of this observation was further confirmed by two whole-brain voxelwise analyses (voxel-mirrored homotopic connectivity analysis and seed-based connectivity analysis). These results enrich our understanding of the functional significance of the coupling between event-related brain responses and spontaneous brain connectivity, and hold the potential to increase the applicability of brain responses as a probe of the mechanisms underlying pathophysiological conditions.
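The across-subject analysis described here can be pictured with the following sketch (simulated data and a simplified regression, not the authors' pipeline): task-evoked regressors are removed from each Heschl's gyrus time series, the residual left-right correlation gives per-subject spontaneous FC, and Spearman's rank correlation relates it to N1 magnitude across subjects.

```python
# Hedged sketch of the across-subject analysis, with simulated subjects and a
# toy design matrix (intercept + one task regressor); not the authors' code.
import numpy as np
from scipy.stats import spearmanr

def residualize(y, X):
    """Remove the least-squares fit of design matrix X from signal y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def spontaneous_fc(left_hg, right_hg, design):
    """Correlation between the task-residualized ROI time series."""
    return np.corrcoef(residualize(left_hg, design), residualize(right_hg, design))[0, 1]

# Toy usage across 20 simulated subjects.
rng = np.random.default_rng(2)
n_sub, n_t = 20, 300
design = np.column_stack([np.ones(n_t), rng.standard_normal(n_t)])
fc = [spontaneous_fc(rng.standard_normal(n_t), rng.standard_normal(n_t), design)
      for _ in range(n_sub)]
n1 = rng.standard_normal(n_sub)       # stand-in for per-subject N1 magnitudes
rho, p = spearmanr(fc, n1)
print(rho, p)
```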
Vallat, Raphael; Lajnef, Tarek; Eichenlaub, Jean-Baptiste; Berthomier, Christian; Jerbi, Karim; Morlet, Dominique; Ruby, Perrine M.
2017-01-01
High dream recallers (HR) show larger brain reactivity to auditory stimuli during wakefulness and sleep as compared to low dream recallers (LR), and also more intra-sleep wakefulness (ISW), but no other modification of the sleep macrostructure. To further understand the possible causal link between brain responses, ISW and dream recall, we investigated the sleep microstructure of HR and LR, and tested whether the amplitude of auditory evoked potentials (AEPs) was predictive of arousing reactions during sleep. Participants (18 HR, 18 LR) were presented with sounds during a whole night of sleep in the lab and polysomnographic data were recorded. Sleep microstructure (arousals, rapid eye movements (REMs), muscle twitches (MTs), spindles, KCs) was assessed using visual, semi-automatic and automatic validated methods. AEPs to arousing (awakenings or arousals) and non-arousing stimuli were subsequently computed. No between-group difference in the microstructure of sleep was found. In N2 sleep, auditory arousing stimuli elicited a larger parieto-occipital positivity and an increased late frontal negativity as compared to non-arousing stimuli. As compared to LR, HR showed more arousing stimuli and more long awakenings, regardless of the sleep stage, but did not show more numerous or longer arousals. These results suggest that the amplitude of the brain response to stimuli during sleep determines subsequent awakening and that awakening duration (and not arousal) is the critical parameter for dream recall. Notably, our results led us to propose that the minimum necessary duration of an awakening during sleep for a successful encoding of dreams into long-term memory is approximately 2 min. PMID:28377708
2014-01-01
Background: We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods: Following the work of de Munck et al., our approach is based on the maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical to all channels, we evaluate it for each individual channel. Results: Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions: The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398
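A minimal sketch of the Kronecker approximation mentioned in the Methods is given below (alternating "flip-flop" updates on simulated residual trials are assumed; this is not the authors' implementation): the spatio-temporal noise covariance is modeled as the Kronecker product of a spatial and a temporal covariance matrix, estimated by alternating maximum-likelihood-style updates.

```python
# Hedged sketch of a Kronecker-structured noise covariance estimate via
# flip-flop updates; the overall scale is split arbitrarily between the two
# factors, and regularization values are assumptions.
import numpy as np

def kronecker_covariance(residuals, n_iter=10, reg=1e-6):
    """residuals: array (trials, channels, samples) of noise-only data."""
    n_tr, n_ch, n_t = residuals.shape
    C_s = np.eye(n_ch)                      # spatial covariance factor
    C_t = np.eye(n_t)                       # temporal covariance factor
    for _ in range(n_iter):
        iCt = np.linalg.inv(C_t + reg * np.eye(n_t))
        C_s = sum(X @ iCt @ X.T for X in residuals) / (n_tr * n_t)
        iCs = np.linalg.inv(C_s + reg * np.eye(n_ch))
        C_t = sum(X.T @ iCs @ X for X in residuals) / (n_tr * n_ch)
    return C_s, C_t

# Toy usage with simulated noise trials (50 trials, 8 channels, 40 samples).
rng = np.random.default_rng(3)
res = rng.standard_normal((50, 8, 40))
C_s, C_t = kronecker_covariance(res)
print(C_s.shape, C_t.shape)
```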
Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.
Berlot, Eva; Formisano, Elia; De Martino, Federico
2018-05-23
Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors 0270-6474/18/384934-09$15.00/0.
Direct Recordings of Pitch Responses from Human Auditory Cortex
Griffiths, Timothy D.; Kumar, Sukhbinder; Sedley, William; Nourski, Kirill V.; Kawasaki, Hiroto; Oya, Hiroyuki; Patterson, Roy D.; Brugge, John F.; Howard, Matthew A.
2010-01-01
Summary Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept. PMID:20605456
Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk
2017-05-01
Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
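The feature-and-classifier pipeline described above can be sketched as follows (sampling rate, channel count and epoch length are assumed; this is not the authors' code): Welch spectral power at the two modulation frequencies (38 and 42 Hz) per channel, plus their ratio, classified with linear discriminant analysis.

```python
# Hedged sketch of an ASSR feature/LDA pipeline with simulated 4-channel epochs.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def assr_features(eeg, fs=256):
    """eeg: array (trials, channels, samples) -> (trials, channels*3) features."""
    feats = []
    for trial in eeg:
        f, pxx = welch(trial, fs=fs, nperseg=fs)       # pxx: (channels, freqs)
        p38 = pxx[:, np.argmin(np.abs(f - 38.0))]
        p42 = pxx[:, np.argmin(np.abs(f - 42.0))]
        feats.append(np.concatenate([p38, p42, p38 / p42]))
    return np.asarray(feats)

# Toy usage with random data standing in for EEG epochs and attention labels.
rng = np.random.default_rng(4)
X = assr_features(rng.standard_normal((40, 4, 5 * 256)))
y = rng.integers(0, 2, 40)
clf = LinearDiscriminantAnalysis().fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```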
Envelope Responses in Single-Trial EEG Indicate Attended Speaker in a Cocktail Party
2013-06-20
... users to modulate their brain activity, such as motor rhythms, in order to signal intent [13], but these often require considerable training. Other ... BCIs forgo training and instead have subjects make choices by attending to one of multiple visual and/or auditory stimuli. By presenting each stimulus ... modulated). An envelope-based BCI could operate on more naturalistic auditory stimuli, such as speech or music. For example, an envelope-based BCI ...
Timm, Jana; Schönwiesner, Marc; Schröger, Erich; SanMiguel, Iria
2016-07-01
Stimuli caused by our own movements are given special treatment in the brain. Self-generated sounds evoke a smaller brain response than externally generated ones. This attenuated response may reflect a predictive mechanism to differentiate the sensory consequences of one's own actions from other sensory input. It may also relate to the feeling of being the agent of the movement and its effects, but little is known about how sensory suppression of brain responses to self-generated sounds is related to judgments of agency. To address this question, we recorded event-related potentials in response to sounds initiated by button presses. In one condition, participants perceived agency over the production of the sounds, whereas in another condition, participants experienced an illusory lack of agency caused by changes in the delay between actions and effects. We compared trials in which the timing of button press and sound was physically identical, but participants' agency judgment differed. Results show reduced amplitudes of the auditory N1 component in response to self-generated sounds irrespective of agency experience, whilst P2 effects correlate with the perception of agency. Our findings suggest that suppression of the auditory N1 component to self-generated sounds does not depend on adaptation to specific action-effect time delays and does not determine agency judgments; however, the suppression of the P2 component might relate more directly to the experience of agency. Copyright © 2016 Elsevier Ltd. All rights reserved.
Brain atrophy can introduce age-related differences in BOLD response.
Liu, Xueqing; Gerraty, Raphael T; Grinband, Jack; Parker, David; Razlighi, Qolamreza R
2017-04-11
Use of functional magnetic resonance imaging (fMRI) in studies of aging is often hampered by uncertainty about age-related differences in the amplitude and timing of the blood oxygenation level dependent (BOLD) response (i.e., hemodynamic impulse response function (HRF)). Such uncertainty introduces a significant challenge in the interpretation of the fMRI results. Even though this issue has been extensively investigated in the field of neuroimaging, there is currently no consensus about the existence and potential sources of age-related hemodynamic alterations. Using an event-related fMRI experiment with two robust and well-studied stimuli (visual and auditory), we detected a significant age-related difference in the amplitude of the response to the auditory stimulus. Accounting for brain atrophy by circumventing spatial normalization and processing the data in subjects' native space eliminated these observed differences. In addition, we simulated fMRI data using age differences in brain morphology while controlling HRF shape. Analyzing these simulated fMRI data using standard image processing resulted in differences in HRF amplitude, which were eliminated when the data were analyzed in subjects' native space. Our results indicate that age-related atrophy introduces inaccuracy in co-registration to standard space, which subsequently appears as attenuation in BOLD response amplitude. Our finding could explain some of the existing contradictory reports regarding age-related differences in the fMRI BOLD responses. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.
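To make concrete what estimating the HRF amplitude involves in such analyses, here is a generic sketch (a double-gamma HRF shape and toy event timing are assumed; this is not the study's pipeline): an event train is convolved with a canonical HRF and the response amplitude is obtained by least squares.

```python
# Hedged sketch of HRF-amplitude estimation with an assumed double-gamma HRF.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)    # assumed double-gamma shape
    return hrf / hrf.max()

def fit_amplitude(bold, events, tr):
    """events: binary vector marking stimulus onsets at each TR."""
    reg = np.convolve(events, canonical_hrf(tr))[: len(bold)]
    X = np.column_stack([np.ones_like(reg), reg])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta[1]                                     # amplitude of the HRF regressor

# Toy usage: recover an amplitude of about 2.0 from noisy simulated data.
tr, n = 2.0, 200
rng = np.random.default_rng(5)
events = (rng.uniform(size=n) < 0.1).astype(float)
bold = 2.0 * np.convolve(events, canonical_hrf(tr))[:n] + rng.standard_normal(n)
print(fit_amplitude(bold, events, tr))
```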
Taking a Toll on Self-Renewal: TLR-Mediated Innate Immune Signaling in Stem Cells.
Alvarado, Alvaro G; Lathia, Justin D
2016-07-01
Innate immunity has evolved as the front-line cellular defense mechanism to acutely sense and decisively respond to microenvironmental alterations. The Toll-like receptor (TLR) family activates signaling pathways in response to stimuli and is well-characterized in both resident and infiltrating immune cells during neural inflammation, injury, and degeneration. Innate immune signaling has also been observed in neural cells during development and disease, including in the stem and progenitor cells that build the brain and are responsible for its homeostasis. Recently, the activation of developmental programs in malignant brain tumors has emerged as a driver for growth via cancer stem cells. In this review we discuss how innate immune signaling interfaces with stem cell maintenance in the normal and neoplastic brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Smit, Jasper V; Jahanshahi, Ali; Janssen, Marcus L F; Stokroos, Robert J; Temel, Yasin
2017-01-01
Recently it has been shown in animal studies that deep brain stimulation (DBS) of auditory structures was able to reduce tinnitus-like behavior. However, the question arises whether hearing might be impaired when interfering in auditory-related network loops with DBS. The auditory brainstem response (ABR) was measured in rats during high frequency stimulation (HFS) and low frequency stimulation (LFS) in the central nucleus of the inferior colliculus (CIC, n = 5) or dentate cerebellar nucleus (DCBN, n = 5). Besides hearing thresholds using ABR, relative measures of latency and amplitude can be extracted from the ABR. In this study, ABR thresholds, interpeak latencies (I-III, III-V, I-V) and the V/I amplitude ratio were measured during the off-stimulation state and during LFS and HFS. In both the CIC and the DCBN groups, no significant differences were observed for any of the outcome measures. DBS in both the CIC and the DCBN did not have adverse effects on hearing measurements. These findings suggest that DBS does not hamper physiological processing in the auditory circuitry.
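The ABR-derived measures used in this study reduce to simple arithmetic on marked peaks; a minimal sketch with hypothetical peak values follows.

```python
# Hedged sketch: interpeak latencies and V/I amplitude ratio from marked ABR
# peaks; the peak values in the example are hypothetical.
def abr_measures(peaks):
    """peaks: dict wave -> (latency_ms, amplitude_uv), e.g. from manual marking."""
    lat = {w: peaks[w][0] for w in ("I", "III", "V")}
    amp = {w: peaks[w][1] for w in ("I", "III", "V")}
    return {
        "IPL_I-III": lat["III"] - lat["I"],
        "IPL_III-V": lat["V"] - lat["III"],
        "IPL_I-V": lat["V"] - lat["I"],
        "V/I_ratio": amp["V"] / amp["I"],
    }

print(abr_measures({"I": (1.2, 0.45), "III": (3.1, 0.30), "V": (5.4, 0.55)}))
```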
The Sound of Mute Vowels in Auditory Word-Stem Completion
ERIC Educational Resources Information Center
Beland, Renee; Prunet, Jean-Francois; Peretz, Isabelle
2009-01-01
Some studies have argued that orthography can influence speakers when they perform oral language tasks. Words containing a mute vowel provide well-suited stimuli to investigate this phenomenon because mute vowels, such as the second "e" in "vegetable", are present orthographically but absent phonetically. Using an auditory word-stem completion…
Jorge, João; Figueiredo, Patrícia; Gruetter, Rolf; van der Zwaag, Wietske
2018-06-01
External stimuli and tasks often elicit negative BOLD responses in various brain regions, and growing experimental evidence supports that these phenomena are functionally meaningful. In this work, the high sensitivity available at 7T was explored to map and characterize both positive (PBRs) and negative BOLD responses (NBRs) to visual checkerboard stimulation, occurring in various brain regions within and beyond the visual cortex. Recently-proposed accelerated fMRI techniques were employed for data acquisition, and procedures for exclusion of large draining vein contributions, together with ICA-assisted denoising, were included in the analysis to improve response estimation. Besides the visual cortex, significant PBRs were found in the lateral geniculate nucleus and superior colliculus, as well as the pre-central sulcus; in these regions, response durations increased monotonically with stimulus duration, in tight covariation with the visual PBR duration. Significant NBRs were found in the visual cortex, auditory cortex, default-mode network (DMN) and superior parietal lobule; NBR durations also tended to increase with stimulus duration, but were significantly less sustained than the visual PBR, especially for the DMN and superior parietal lobule. Responses in visual and auditory cortex were further studied for checkerboard contrast dependence, and their amplitudes were found to increase monotonically with contrast, linearly correlated with the visual PBR amplitude. Overall, these findings suggest the presence of dynamic neuronal interactions across multiple brain regions, sensitive to stimulus intensity and duration, and demonstrate the richness of information obtainable when jointly mapping positive and negative BOLD responses at a whole-brain scale, with ultra-high field fMRI. © 2018 Wiley Periodicals, Inc.
Functional anatomic studies of memory retrieval for auditory words and visual pictures.
Buckner, R L; Raichle, M E; Miezin, F M; Petersen, S E
1996-10-01
Functional neuroimaging with positron emission tomography was used to study brain areas activated during memory retrieval. Subjects (n = 15) recalled items from a recent study episode (episodic memory) during two paired-associate recall tasks. The tasks differed in that PICTURE RECALL required pictorial retrieval, whereas AUDITORY WORD RECALL required word retrieval. Word REPETITION and REST served as two reference tasks. Comparing recall with repetition revealed the following observations. (1) Right anterior prefrontal activation (similar to that seen in several previous experiments), in addition to bilateral frontal-opercular and anterior cingulate activations. (2) An anterior subdivision of medial frontal cortex [pre-supplementary motor area (SMA)] was activated, which could be dissociated from a more posterior area (SMA proper). (3) Parietal areas were activated, including a posterior medial area near precuneus, that could be dissociated from an anterior parietal area that was deactivated. (4) Multiple medial and lateral cerebellar areas were activated. Comparing recall with rest revealed similar activations, except right prefrontal activation was minimal and activations related to motor and auditory demands became apparent (e.g., bilateral motor and temporal cortex). Directly comparing picture recall with auditory word recall revealed few notable activations. Taken together, these findings suggest a pathway that is commonly used during the episodic retrieval of picture and word stimuli under these conditions. Many areas in this pathway overlap with areas previously activated by a different set of retrieval tasks using stem-cued recall, demonstrating their generality. Examination of activations within individual subjects in relation to structural magnetic resonance images provided anatomic information about the location of these activations. Such data, when combined with the dissociations between functional areas, provide an increasingly detailed picture of the brain pathways involved in episodic retrieval tasks.
Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan
2015-01-01
An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799
Functional MR imaging assessment of a non-responsive brain injured patient.
Moritz, C H; Rowley, H A; Haughton, V M; Swartz, K R; Jones, J; Badie, B
2001-10-01
Functional magnetic resonance imaging (fMRI) was requested to assist in the evaluation of a comatose 38-year-old woman who had sustained multiple cerebral contusions from a motor vehicle accident. Previous electrophysiologic studies suggested absence of thalamocortical processing in response to median nerve stimulation. Whole-brain fMRI was performed utilizing visual, somatosensory, and auditory stimulation paradigms. Results demonstrated intact task-correlated sensory and cognitive blood oxygen level dependent (BOLD) hemodynamic response to stimuli. Electrodiagnostic studies were repeated and evoked potentials indicated supratentorial recovery in the cerebrum. At 3 months post-trauma, the patient had recovered many cognitive and sensorimotor functions, accurately reflecting the prognostic fMRI evaluation. These results indicate that fMRI examinations may provide a useful evaluation of brain function in non-responsive brain trauma patients.
Geissler, Diana B; Ehret, Günter
2004-02-01
Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.
Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.
2013-01-01
The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350
Investigating brain response to music: a comparison of different fMRI acquisition schemes.
Mueller, Karsten; Mildner, Toralf; Fritz, Thomas; Lepsien, Jöran; Schwarzbauer, Christian; Schroeter, Matthias L; Möller, Harald E
2011-01-01
Functional magnetic resonance imaging (fMRI) in auditory experiments is a challenge, because the scanning procedure produces considerable noise that can interfere with the auditory paradigm. The noise might either mask the auditory material presented, or interfere with stimuli designed to evoke emotions because it sounds loud and rather unpleasant. Therefore, scanning paradigms that allow interleaved auditory stimulation and image acquisition appear to be advantageous. The sparse temporal sampling (STS) technique uses a very long repetition time in order to achieve a stimulus presentation in the absence of scanner noise. Although only relatively few volumes are acquired for the resulting data sets, there have been recent studies where this method has furthered remarkable results. A new development is the interleaved silent steady state (ISSS) technique. Compared with STS, this method is capable of acquiring several volumes in the time frame between the auditory trials (while the magnetization is kept in a steady state during stimulus presentation). In order to draw conclusions about the optimum fMRI procedure with auditory stimulation, different echo-planar imaging (EPI) acquisition schemes were compared: Continuous scanning, STS, and ISSS. The total acquisition time of each sequence was adjusted to about 12.5 min. The results indicate that the ISSS approach exhibits the highest sensitivity in detecting subtle activity in sub-cortical brain regions. Copyright © 2010 Elsevier Inc. All rights reserved.
Integrating Information from Different Senses in the Auditory Cortex
King, Andrew J.; Walker, Kerry M.M.
2015-01-01
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
2015-06-15
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
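As a rough illustration of speech-brain coupling (simplified here to sensor-level coherence with the low-pass speech envelope; the study itself used source-level cross-spectra and transfer entropy), the following sketch computes low-frequency coherence between a simulated brain channel and a speech envelope; the sampling rate, cutoff frequencies and window length are assumptions.

```python
# Hedged sketch of envelope-based speech-brain coherence on simulated data.
import numpy as np
from scipy.signal import coherence, hilbert, butter, filtfilt

def envelope(speech, fs, lp=10.0):
    """Broadband amplitude envelope, low-pass filtered below an assumed 10 Hz."""
    env = np.abs(hilbert(speech))
    b, a = butter(4, lp / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def speech_brain_coherence(brain, speech, fs, fmax=8.0):
    f, coh = coherence(brain, envelope(speech, fs), fs=fs, nperseg=4 * fs)
    keep = f <= fmax
    return f[keep], coh[keep]

# Toy usage: a brain channel that partly follows the speech envelope.
fs = 200
rng = np.random.default_rng(6)
speech = rng.standard_normal(60 * fs)
brain = 0.5 * envelope(speech, fs) + rng.standard_normal(60 * fs)
f, coh = speech_brain_coherence(brain, speech, fs)
print(f[np.argmax(coh)], coh.max())
```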
Learning-induced neural plasticity of speech processing before birth
Partanen, Eino; Kujala, Teija; Näätänen, Risto; Liitola, Auli; Sambeth, Anke; Huotilainen, Minna
2013-01-01
Learning, the foundation of adaptive and intelligent behavior, is based on plastic changes in neural assemblies, reflected by the modulation of electric brain responses. In infancy, auditory learning implicates the formation and strengthening of neural long-term memory traces, improving discrimination skills, in particular those forming the prerequisites for speech perception and understanding. Although previous behavioral observations show that newborns react differentially to unfamiliar sounds vs. familiar sound material that they were exposed to as fetuses, the neural basis of fetal learning has not thus far been investigated. Here we demonstrate direct neural correlates of human fetal learning of speech-like auditory stimuli. We presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. Furthermore, a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Moreover, the learning effect was generalized to other types of similar speech sounds not included in the training material. Consequently, our results indicate neural commitment specifically tuned to the speech features heard before birth and their memory representations. PMID:23980148
Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆
Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri
2015-01-01
Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
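The entropy side of this distinction is straightforward to compute; a minimal sketch follows (discretization into a fixed number of bins is assumed, and this simple measure does not capture complexity proper, which requires a model of the signal's generator).

```python
# Hedged sketch: Shannon entropy of a discretized 1-D auditory sequence.
import numpy as np

def sequence_entropy(values, n_bins=8):
    """Shannon entropy (bits) of a 1-D signal discretized into n_bins symbols."""
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
print("random sequence:", sequence_entropy(rng.uniform(size=1000)))   # near log2(8) = 3 bits
print("constant sequence:", sequence_entropy(np.ones(1000)))          # 0 bits
```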
GABAergic Local Interneurons Shape Female Fruit Fly Response to Mating Songs.
Yamada, Daichi; Ishimoto, Hiroshi; Li, Xiaodong; Kohashi, Tsunehiko; Ishikawa, Yuki; Kamikouchi, Azusa
2018-05-02
Many animals use acoustic signals to attract a potential mating partner. In fruit flies (Drosophila melanogaster), the courtship pulse song has a species-specific interpulse interval (IPI) that activates mating. Although a series of auditory neurons in the fly brain exhibit different tuning patterns to IPIs, it is unclear how the response of each neuron is tuned. Here, we studied the neural circuitry regulating the activity of antennal mechanosensory and motor center (AMMC)-B1 neurons, key secondary auditory neurons in the excitatory neural pathway that relay song information. By performing Ca2+ imaging in female flies, we found that the IPI selectivity observed in AMMC-B1 neurons differs from that of upstream auditory sensory neurons [Johnston's organ (JO)-B]. Selective knock-down of a GABAA receptor subunit in AMMC-B1 neurons increased their response to short IPIs, suggesting that GABA suppresses AMMC-B1 activity at these IPIs. Connection mapping identified two GABAergic local interneurons that synapse with AMMC-B1 and JO-B. Ca2+ imaging combined with neuronal silencing revealed that these local interneurons, AMMC-LN and AMMC-B2, shape the response pattern of AMMC-B1 neurons at a 15 ms IPI. Neuronal silencing studies further suggested that both GABAergic local interneurons suppress the behavioral response to artificial pulse songs in flies, particularly those with a 15 ms IPI. Altogether, we identified a circuit containing two GABAergic local interneurons that affects the temporal tuning of AMMC-B1 neurons in the song relay pathway and the behavioral response to the courtship song. Our findings suggest that feedforward inhibitory pathways adjust the behavioral response to courtship pulse songs in female flies. SIGNIFICANCE STATEMENT To understand how the brain detects time intervals between sound elements, we studied the neural pathway that relays species-specific courtship song information in female Drosophila melanogaster. We demonstrate that the signal transmission from auditory sensory neurons to key secondary auditory neurons antennal mechanosensory and motor center (AMMC)-B1 is the first step to generate time interval selectivity of neurons in the song relay pathway. Two GABAergic local interneurons are suggested to shape the interval selectivity of AMMC-B1 neurons by receiving auditory inputs and in turn providing feedforward inhibition onto AMMC-B1 neurons. Furthermore, these GABAergic local interneurons suppress the song response behavior in an interval-dependent manner. Our results provide new insights into the neural circuit basis to adjust neuronal and behavioral responses to a species-specific communication sound. Copyright © 2018 the authors 0270-6474/18/384329-19$15.00/0.
Magnetic stem cell targeting to the inner ear
NASA Astrophysics Data System (ADS)
Le, T. N.; Straatman, L.; Yanai, A.; Rahmanian, R.; Garnis, C.; Häfeli, U. O.; Poblete, T.; Westerberg, B. D.; Gregory-Evans, K.
2017-12-01
Severe sensorineural deafness is often accompanied by a loss of auditory neurons in addition to injury of the cochlear epithelium and hair cell loss. Cochlear implant function, however, depends on a healthy complement of neurons, and their preservation is vital in achieving optimal results. We have developed a technique to target mesenchymal stem cells (MSCs) to a deafened rat cochlea. We then assessed the neuroprotective effect of systemically delivered MSCs on the survival and function of spiral ganglion neurons (SGNs). MSCs were labeled with superparamagnetic nanoparticles, injected via the systemic circulation, and targeted using a magnetized cochlear implant and an external magnet. Neurotrophic factor concentrations, survival of SGNs, and auditory function were assessed at 1 week and 4 weeks after treatments and compared against multiple control groups. Significant numbers of magnetically targeted MSCs (>30 MSCs/section) were present in the cochlea, with an accompanying elevation of brain-derived neurotrophic factor and glial cell-derived neurotrophic factor levels (p < 0.001). In addition, we saw improved survival of SGNs (approximately 80% survival at 4 weeks). Hearing threshold levels in magnetically targeted rats were found to be significantly better than those of control rats (p < 0.05). These results indicate that magnetic targeting of MSCs to the cochlea can be accomplished with a magnetized cochlear permalloy implant and an external magnet. The targeted stem cells release neurotrophic factors, resulting in improved SGN survival and hearing recovery. Combining magnetic cell-based therapy and cochlear implantation may improve cochlear implant function in treating deafness.
Threlkeld, Steven W; McClure, Melissa M; Rosen, Glenn D; Fitch, R Holly
2006-09-13
Induction of a focal freeze lesion to the skullcap of a 1-day-old rat pup leads to the formation of microgyria similar to those identified postmortem in human dyslexics. Rats with microgyria exhibit rapid auditory processing deficits similar to those seen in language-impaired (LI) children and infants at risk for LI, and these effects are particularly marked in juvenile as compared to adult subjects. In the current study, a startle response paradigm was used to investigate gap detection in juvenile and adult rats that received bilateral freezing lesions or sham surgery on postnatal day (P) 1, 3, or 5. Microgyria were confirmed in P1 and P3 lesion rats, but not in the P5 lesion group. We found a significant reduction in brain weight and neocortical volume in P1 and P3 lesioned brains relative to shams. Juvenile (P27-39) behavioral data indicated significant rapid auditory processing deficits in all three lesion groups as compared to sham subjects, while adult (P60+) data revealed a persistent disparity only between P1-lesioned rats and shams. Combined results suggest that generalized pathology affecting neocortical development is responsible for the presence of rapid auditory processing deficits, rather than factors specific to the formation of microgyria per se. Finally, results show that the window for the induction of rapid auditory processing deficits through disruption of neurodevelopment appears to extend beyond the endpoint for cortical neuronal migration, although the persistent deficits exhibited by P1 lesion subjects suggest a secondary neurodevelopmental window, at the time of cortical neuromigration, representing a peak period of vulnerability.
Kantrowitz, Joshua T; Epstein, Michael L; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M; Revheim, Nadine; Lehrfeld, Nayla P; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C
2016-12-01
Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time-frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm
Höhne, Johannes; Tangermann, Michael
2014-01-01
Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses, which are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978
Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari
2012-06-01
Musicians' skills in auditory processing depend highly on instrument, performance practice, and level of expertise. Yet it is not known whether the style/genre of music might shape auditory processing in the brains of musicians. Here, we aimed to tackle the role of musical style/genre in modulating neural and behavioral responses to changes in musical features. Using a novel, fast, and musical-sounding multi-feature paradigm, we measured the mismatch negativity (MMN), a pre-attentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, rock/pop) and in non-musicians. Jazz and classical musicians scored higher in the musical aptitude test than band musicians and non-musicians, especially with regard to tonal abilities. These results were extended by the MMN findings: jazz musicians had larger MMN amplitudes than all other experimental groups across the six different sound features, indicating a greater overall sensitivity to auditory outliers. In particular, we found enhanced processing of pitch and sliding pitches in jazz musicians only. Furthermore, we observed a more frontal MMN to pitch and location compared to the other deviants in jazz musicians and left lateralization of the MMN to timbre in classical musicians. These findings indicate that the characteristics of the style/genre of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in a musical context. Musicians' brains are hence shaped by the type of training, musical style/genre, and listening experiences. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rosskothen-Kuhl, Nicole; Hildebrandt, Heika; Birkenhäger, Ralf; Illing, Robert-Benjamin
2018-01-01
Neuron–glia interactions contribute to tissue homeostasis and functional plasticity in the mammalian brain, but it remains unclear how this is achieved. The potential of central auditory brain tissue for stimulation-dependent cellular remodeling was studied in hearing-experienced and neonatally deafened rats. At adulthood, both groups received an intracochlear electrode into the left cochlea and were continuously stimulated for 1 or 7 days after waking up from anesthesia. Normal hearing and deafness were assessed by auditory brainstem responses (ABRs). The effectiveness of stimulation was verified by electrically evoked ABRs as well as immunocytochemistry and in situ hybridization for the immediate early gene product Fos on sections through the auditory midbrain containing the inferior colliculus (IC). Whereas hearing-experienced animals showed a tonotopically restricted Fos response in the IC contralateral to electrical intracochlear stimulation, Fos-positive neurons were found almost throughout the contralateral IC in deaf animals. In deaf rats, the Fos response was accompanied by a massive increase of GFAP indicating astrocytic hypertrophy, and a local activation of microglial cells identified by IBA1. These glial responses led to a noticeable increase of neuron–glia approximations. Moreover, staining for the GABA-synthesizing enzymes GAD65 and GAD67 rose significantly in neuronal cell bodies and presynaptic boutons in the contralateral IC of deaf rats. Activation of neurons and glial cells and tissue re-composition were in no case accompanied by cell death, as would have been apparent by a TUNEL reaction. These findings suggest that growth and activity of glial cells are crucial for the local adjustment of neuronal inhibition to neuronal excitation. PMID:29520220
Scheerer, N E; Jacobson, D S; Jones, J A
2016-02-09
Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs between child and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed between the child and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.
Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R
2003-05-01
A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16- or 32-channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with characteristics similar to those evoked by normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means of graded transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
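The "50% point" reported above is the current level at which the intensity generalization gradient crosses 50% detections. A minimal sketch, assuming a monotonically increasing gradient and hypothetical probe-trial data, estimates it by linear interpolation:

```python
import numpy as np

def fifty_percent_point(levels, p_detect):
    """Estimate the stimulus level yielding 50% detections by linear
    interpolation of a monotonically increasing generalization gradient."""
    levels = np.asarray(levels, dtype=float)
    p_detect = np.asarray(p_detect, dtype=float)
    # np.interp expects increasing x values; here x is detection probability
    return float(np.interp(0.5, p_detect, levels))

# Hypothetical probe-trial data: current magnitude (microA) vs detection probability
current_ua = [0, 15, 30, 45, 60, 75, 90]
p_detect = [0.02, 0.10, 0.35, 0.62, 0.80, 0.92, 0.97]

print(f"50% point ≈ {fifty_percent_point(current_ua, p_detect):.1f} microA")
```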
Fritzsch, Bernd; Beisel, Kirk W.; Hansen, Laura
2014-01-01
The inner ear of mammals uses neurosensory cells derived from the embryonic ear for mechanoelectric transduction of vestibular and auditory stimuli (the hair cells) and conducts this information to the brain via sensory neurons. As with most other neurons of mammals, lost hair cells and sensory neurons are not spontaneously replaced and result instead in age-dependent progressive hearing loss. We review the molecular basis of neurosensory development in the mouse ear to provide a blueprint for possible enhancement of therapeutically useful transformation of stem cells into lost neurosensory cells. We identify several readily available adult sources of stem cells that express, like the ectoderm-derived ear, genes known to be essential for ear development. Use of these stem cells, combined with molecular insights into neurosensory cell specification and proliferation regulation of the ear, might allow for neurosensory regeneration of mammalian ears in the near future. PMID:17120192
Paraneoplastic brain stem encephalitis.
Blaes, Franz
2013-04-01
Paraneoplastic brain stem encephalitis can occur as an isolated clinical syndrome or, more often, may be part of a more widespread encephalitis. Different antineuronal autoantibodies, such as anti-Hu, anti-Ri, and anti-Ma2, can be associated with the syndrome, and the most frequent tumors are lung and testicular cancer. Anti-Hu-associated brain stem encephalitis does not normally respond to immunotherapy; the syndrome may stabilize under tumor treatment. Brain stem encephalitis with anti-Ma2 often improves after immunotherapy and/or tumor therapy, whereas only a minority of anti-Ri-positive patients respond to immunosuppressants or tumor treatment. Opsoclonus-myoclonus syndrome (OMS) in children, almost exclusively associated with neuroblastoma, shows a good response to steroids, ACTH, and rituximab; some patients do respond to intravenous immunoglobulins or cyclophosphamide. In adults, OMS is mainly associated with small cell lung cancer or gynecological tumors, and only a small proportion of patients show improvement after immunotherapy. Achieving earlier diagnosis and treatment remains a major challenge in improving the prognosis of both paraneoplastic brain stem encephalitis and OMS.
Schlund, M W
2000-10-01
Bedside hearing screenings are routinely conducted by speech and language pathologists for brain injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patient abilities to follow procedural instructions. This article describes the effects of jointly applying behavioral methods and psychophysical methods to improve two severely brain-injured survivors' attending and reporting on auditory test stimuli presentation. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings were conducted and estimates of auditory thresholds were obtained.
Post-treatment effects of local GDNF administration to the inner ears of deafened guinea pigs.
Fransson, Anette; Maruyama, Jun; Miller, Josef M; Ulfendahl, Mats
2010-09-01
For patients with profound hearing loss, a cochlear implant is the only treatment available today. The function of a cochlear implant depends in part on the function and survival of spiral ganglion neurons. Following deafferentation, glial cell-derived neurotrophic factor (GDNF) is known to affect spiral ganglion neuron survival. The purpose of this study was to assess delayed GDNF treatment after deafening, the effects of cessation of GDNF treatment, and the effects of subsequent antioxidants on responsiveness and survival of the spiral ganglion neurons. Three-week deafened (by local neomycin administration) guinea pigs were implanted in the scala tympani with a combined cochlear implant electrode and cannula. GDNF (1 μg/mL) or artificial perilymph was then delivered for 4 weeks, following which the animals received systemic ascorbic acid + Trolox or saline for an additional 4 weeks. Thresholds for electrically-evoked auditory brain stem responses (eABRs) were significantly elevated at 3 weeks with deafness, stabilized with GDNF, and showed no change with GDNF cessation and treatment with antioxidants or saline. The populations of spiral ganglion neurons were reduced with deafness (by 40% at 3 weeks and 70% at 11 weeks), and rescued from cell death by GDNF with no further reduction at 8 weeks following 4 weeks of cessation of GDNF treatment equally in both the antioxidant- and saline-treated groups. Local growth factor treatment of the deaf ear may prevent deterioration in electrical responsiveness and rescue auditory nerve cells from death; these effects outlast the period of treatment, and may enhance the benefits of cochlear implant therapy for the deaf.
Alagappan, Dhivyaa; Lazzarino, Deborah A; Felling, Ryan J; Balan, Murugabaskar; Kotenko, Sergei V; Levison, Steven W
2009-01-01
There is an increase in the numbers of neural precursors in the SVZ (subventricular zone) after moderate ischaemic injuries, but the extent of stem cell expansion and the resultant cell regeneration is modest. Therefore our studies have focused on understanding the signals that regulate these processes towards achieving a more robust amplification of the stem/progenitor cell pool. The goal of the present study was to evaluate the role of the EGFR [EGF (epidermal growth factor) receptor] in the regenerative response of the neonatal SVZ to hypoxic/ischaemic injury. We show that injury recruits quiescent cells in the SVZ to proliferate, that they divide more rapidly and that there is increased EGFR expression on both putative stem cells and progenitors. With the amplification of the precursors in the SVZ after injury there is enhanced sensitivity to EGF, but not to FGF (fibroblast growth factor)-2. EGF-dependent SVZ precursor expansion, as measured using the neurosphere assay, is lost when the EGFR is pharmacologically inhibited, and forced expression of a constitutively active EGFR is sufficient to recapitulate the exaggerated proliferation of the neural stem/progenitors that is induced by hypoxic/ischaemic brain injury. Cumulatively, our results reveal that increased EGFR signalling precedes the increase in the abundance of the putative neural stem cells, and our studies implicate the EGFR as a key regulator of the expansion of SVZ precursors in response to brain injury. Thus modulating EGFR signalling represents a potential target for therapies to enhance brain repair from endogenous neural precursors following hypoxic/ischaemic and other brain injuries. PMID:19570028
Kubota, Kazuo; Saito, Yoshiaki; Ohba, Chihiro; Saitsu, Hirotomo; Fukuyama, Tetsuhiro; Ishiyama, Akihiko; Saito, Takashi; Komaki, Hirofumi; Nakagawa, Eiji; Sugai, Kenji; Sasaki, Masayuki; Matsumoto, Naomichi
2015-01-01
A boy with spastic paraplegia type 2 (SPG2) due to a novel splice site mutation of PLP1 presented with progressive spasticity of the lower limbs, which was first observed during late infancy, when he gained the ability to walk with support. His speech was slow and he had dysarthria. The patient showed mildly delayed intellectual development. Subtotal dysmyelination in the central nervous system was revealed, which was especially prominent in structures known to be myelinated during earlier periods, whereas structures that are myelinated later were better myelinated. These findings on brain magnetic resonance imaging were unusual for subjects with PLP1 mutations. Peaks I and II of the auditory brainstem response (ABR) were evoked normally, but peaks III-V were not clearly demarcated, similar to the findings in Pelizaeus-Merzbacher disease. These findings of brain MRI and ABR may be characteristic of a subtype of SPG2 patients. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner
2017-05-01
Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results indicate that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), mismatch negativity (MMN). As evidence has shown that tDCS lasting effects may be dependent on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e. device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e. no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.
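MMN amplitude, the main outcome measure above, is commonly quantified as the mean of the deviant-minus-standard difference wave within a post-stimulus latency window. The sketch below illustrates that computation on synthetic single-channel epochs; the latency window, sampling rate, and synthetic data are assumptions, not the study's analysis pipeline.

```python
import numpy as np

def mmn_amplitude(standard_trials, deviant_trials, fs=500, window_ms=(100, 200)):
    """Common MMN quantification: mean of the deviant-minus-standard
    difference wave within a latency window (e.g. 100-200 ms post-stimulus).

    standard_trials / deviant_trials: arrays of shape (n_trials, n_samples),
    single-channel epochs time-locked to stimulus onset.
    """
    diff_wave = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
    lo = int(window_ms[0] * 1e-3 * fs)
    hi = int(window_ms[1] * 1e-3 * fs)
    return diff_wave[lo:hi].mean()

# Synthetic example: deviants carry an extra negative deflection around 150 ms
rng = np.random.default_rng(1)
fs, n_samples = 500, 300                                     # 600 ms epochs at 500 Hz
t = np.arange(n_samples) / fs
bump = -2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))   # microvolt-scale negativity
standards = rng.normal(0, 1, (200, n_samples))
deviants = rng.normal(0, 1, (50, n_samples)) + bump

print(f"MMN amplitude ≈ {mmn_amplitude(standards, deviants, fs):.2f} µV")
```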
Human-like brain hemispheric dominance in birdsong learning.
Moorman, Sanne; Gobes, Sharon M H; Kuijpers, Maaike; Kerkhofs, Amber; Zandbergen, Matthijs A; Bolhuis, Johan J
2012-07-31
Unlike nonhuman primates, songbirds learn to vocalize very much like human infants acquire spoken language. In humans, Broca's area in the frontal lobe and Wernicke's area in the temporal lobe are crucially involved in speech production and perception, respectively. Songbirds have analogous brain regions that show a similar neural dissociation between vocal production and auditory perception and memory. In both humans and songbirds, there is evidence for lateralization of neural responsiveness in these brain regions. Human infants already show left-sided dominance in their brain activation when exposed to speech. Moreover, a memory-specific left-sided dominance in Wernicke's area for speech perception has been demonstrated in 2.5-mo-old babies. It is possible that auditory-vocal learning is associated with hemispheric dominance and that this association arose in songbirds and humans through convergent evolution. Therefore, we investigated whether there is similar song memory-related lateralization in the songbird brain. We exposed male zebra finches to tutor or unfamiliar song. We found left-sided dominance of neuronal activation in a Broca-like brain region (HVC, a letter-based name) of juvenile and adult zebra finch males, independent of the song stimulus presented. In addition, juvenile males showed left-sided dominance for tutor song but not for unfamiliar song in a Wernicke-like brain region (the caudomedial nidopallium). Thus, left-sided dominance in the caudomedial nidopallium was specific for the song-learning phase and was memory-related. These findings demonstrate a remarkable neural parallel between birdsong and human spoken language, and they have important consequences for our understanding of the evolution of auditory-vocal learning and its neural mechanisms.
High-fat diet-induced downregulation of anorexic leukemia inhibitory factor in the brain stem.
Licursi, Maria; Alberto, Christian O; Dias, Alex; Hirasawa, Kensuke; Hirasawa, Michiru
2016-11-01
High-fat diet (HFD) is known to induce low-grade hypothalamic inflammation. Whether inflammation occurs in other brain areas remains unknown. This study tested the effect of short-term HFD on cytokine gene expression and identified leukemia inhibitory factor (LIF) as a responsive cytokine in the brain stem. Thus, functional and cellular effects of LIF in the brain stem were investigated. Male rats were fed chow or HFD for 3 days, and then gene expression was analyzed in different brain regions for IL-1β, IL-6, TNF-α, and LIF. The effect of intracerebroventricular injection of LIF on chow intake and body weight was also tested. Patch clamp recording was performed in the nucleus tractus solitarius (NTS). HFD increased pontine TNF-α mRNA while downregulating LIF in all major parts of the brain stem, but not in the hypothalamus or hippocampus. LIF injection into the cerebral aqueduct suppressed food intake without conditioned taste aversion, suggesting that LIF can induce anorexia via lower brain regions without causing malaise. In the NTS, a key brain stem nucleus for food intake regulation, LIF induced acute changes in neuronal excitability. HFD-induced downregulation of anorexic LIF in the brain stem may provide a permissive condition for HFD overconsumption. This may be at least partially mediated by the NTS. © 2016 The Obesity Society.
Civilisations of the Left Cerebral Hemisphere?
ERIC Educational Resources Information Center
Racle, Gabriel L.
Research conducted by Tadanobu Tsunoda on auditory and visual sensation, designed to test and understand the functions of the cerebral hemispheres, is discussed. Tsunoda discovered that the Japanese responses to sounds by the left and the right sides of the brain are very different from the responses obtained from people from other countries. His…
Carnell, Susan; Benson, Leora; Pantazatos, Spiro P; Hirsch, Joy; Geliebter, Allan
2014-11-01
The obesogenic environment is pervasive, yet only some people become obese. The aim was to investigate whether obese individuals show differential neural responses to visual and auditory food cues, independent of cue modality. Obese (BMI 29-41, n = 10) and lean (BMI 20-24, n = 10) females underwent fMRI scanning during presentation of auditory (spoken word) and visual (photograph) cues representing high-energy-density (ED) and low-ED foods. The effect of obesity on whole-brain activation, and on functional connectivity with the midbrain/VTA, was examined. Obese compared with lean women showed greater modality-independent activation of the midbrain/VTA and putamen in response to high-ED (vs. low-ED) cues, as well as relatively greater functional connectivity between the midbrain/VTA and cerebellum (P < 0.05 corrected). Heightened modality-independent responses to food cues within the midbrain/VTA and putamen, and altered functional connectivity between the midbrain/VTA and cerebellum, could contribute to excessive food intake in obese individuals. © 2014 The Obesity Society.
Chen, Yu-Chen; Li, Xiaowei; Liu, Lijie; Wang, Jian; Lu, Chun-Qiang; Yang, Ming; Jiao, Yun; Zang, Feng-Chao; Radziwon, Kelly; Chen, Guang-Di; Sun, Wei; Krishnan Muthaiah, Vijaya Prakash; Salvi, Richard; Teng, Gao-Jun
2015-01-01
Hearing loss often triggers an inescapable buzz (tinnitus) and causes everyday sounds to become intolerably loud (hyperacusis), but exactly where and how this occurs in the brain is unknown. To identify the neural substrate for these debilitating disorders, we induced both tinnitus and hyperacusis with an ototoxic drug (salicylate) and used behavioral, electrophysiological, and functional magnetic resonance imaging (fMRI) techniques to identify the tinnitus–hyperacusis network. Salicylate depressed the neural output of the cochlea, but vigorously amplified sound-evoked neural responses in the amygdala, medial geniculate, and auditory cortex. Resting-state fMRI revealed hyperactivity in an auditory network composed of inferior colliculus, medial geniculate, and auditory cortex with side branches to cerebellum, amygdala, and reticular formation. Functional connectivity revealed enhanced coupling within the auditory network and segments of the auditory network and cerebellum, reticular formation, amygdala, and hippocampus. A testable model accounting for distress, arousal, and gating of tinnitus and hyperacusis is proposed. DOI: http://dx.doi.org/10.7554/eLife.06576.001 PMID:25962854
Dyslexia risk gene relates to representation of sound in the auditory brainstem.
Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D
2017-04-01
Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in spike time precision of neurons. Strikingly, poor readers show an imprecise encoding of fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk and intelligence, partial correlation analyses associated a higher dyslexia risk loading with KIAA0319 with noisier responses. In contrast, a higher risk loading with DCDC2 was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus reduced neural discrimination ability of stop consonants, occurred in genotypes carrying a higher number of KIAA0319 risk alleles. The current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain-gene relationship provides insight into the complex relationships between phenotype and genotype, thereby improving the understanding of dyslexia as a complex multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
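"Response consistency" of the speech-evoked brainstem response can be operationalized in several ways; one common approach, assumed here for illustration rather than taken from the paper, is the mean correlation between sub-averages of two random halves of the trials:

```python
import numpy as np

def response_consistency(trials, n_splits=100, rng=None):
    """Estimate brainstem response consistency as the mean Pearson correlation
    between sub-averages of two random halves of the trials.

    trials: array of shape (n_trials, n_samples). This split-half approach is
    one common operationalization, not necessarily the authors' exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_trials = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        idx = rng.permutation(n_trials)
        half_a = trials[idx[:n_trials // 2]].mean(axis=0)
        half_b = trials[idx[n_trials // 2:]].mean(axis=0)
        rs.append(np.corrcoef(half_a, half_b)[0, 1])
    return float(np.mean(rs))

# Synthetic [da]-evoked responses: a stable subject vs a noisy one
rng = np.random.default_rng(2)
t = np.arange(0, 0.17, 1 / 10_000)                  # 170 ms epoch at 10 kHz
template = np.sin(2 * np.pi * 100 * t) * np.exp(-t / 0.05)
stable = template + rng.normal(0, 0.5, (300, t.size))
noisy = template + rng.normal(0, 2.0, (300, t.size))

print("stable subject:", round(response_consistency(stable, rng=rng), 2))
print("noisy subject: ", round(response_consistency(noisy, rng=rng), 2))
```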
PSEN1 and PSEN2 gene expression in Alzheimer's disease brain: a new approach.
Delabio, Roger; Rasmussen, Lucas; Mizumoto, Igor; Viani, Gustavo-Arruda; Chen, Elizabeth; Villares, João; Costa, Isabela-Bazzo; Turecki, Gustavo; Linde, Sandra Aparecido; Smith, Marilia Cardoso; Payão, Spencer-Luiz
2014-01-01
Presenilin 1 (PSEN1) and presenilin 2 (PSEN2) genes encode the major component of γ-secretase, which is responsible for sequential proteolytic cleavages of amyloid precursor proteins and the subsequent formation of amyloid-β peptides. A total of 150 RNA samples from the entorhinal cortex, auditory cortex and hippocampal regions of individuals with Alzheimer's disease (AD) and elderly control subjects were analyzed using real-time RT-PCR. There were no differences between groups for PSEN1 expression. PSEN2 was significantly downregulated in the auditory cortex of AD patients when compared to controls and when compared to other brain regions of the patients. Alteration in PSEN2 expression may be a risk factor for AD.
Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia
Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail
2015-01-01
Major theories on the neural basis of schizophrenic core symptoms highlight aberrant salience network activity (insula and anterior cingulate cortex), prefrontal hypoactivation, sensory processing deficits as well as an impaired connectivity between temporal and prefrontal cortices. The mismatch negativity is a potential biomarker of schizophrenia and its reduction might be a consequence of each of these mechanisms. In contrast to the previous electroencephalographic studies, functional magnetic resonance imaging may disentangle the involved brain networks at high spatial resolution and determine contributions from localized brain responses and functional connectivity to the schizophrenic impairments. Twenty-four patients and 24 matched control subjects underwent functional magnetic resonance imaging during an optimized auditory mismatch task. Haemodynamic responses and functional connectivity were compared between groups. These data sets further entered a diagnostic classification analysis to assess impairments on the individual patient level. In the control group, mismatch responses were detected in the auditory cortex, prefrontal cortex and the salience network (insula and anterior cingulate cortex). Furthermore, mismatch processing was associated with a deactivation of the visual system and the dorsal attention network indicating a shift of resources from the visual to the auditory domain. The patients exhibited reduced activation in all of the respective systems (right auditory cortex, prefrontal cortex, and the salience network) as well as reduced deactivation of the visual system and the dorsal attention network. Group differences were most prominent in the anterior cingulate cortex and adjacent prefrontal areas. The latter regions also exhibited a reduced functional connectivity with the auditory cortex in the patients. In the classification analysis, haemodynamic responses yielded a maximal accuracy of 83% based on four features; functional connectivity data performed similarly or worse for up to about 10 features. However, connectivity data yielded a better performance when including more than 10 features yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation as well as a disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments on the individual patient level, rendering it a potential biomarker of schizophrenia. PMID:25743635
Brain stem audiometry may supply markers for diagnostic and therapeutic control in psychiatry.
Nielzén, Sören; Holmberg, Jens; Sköld, Mia; Nehlstedt, Sara
2016-10-06
The purpose of the present study was to try an alternative way of analyzing the ABR (auditory brainstem response). The stimuli were complex sounds (c-ABR) as used in earlier studies. A further aim was to corroborate earlier findings that this method can discriminate several neuropsychiatric states. Forty healthy control subjects, 26 subjects with a diagnosis of schizophrenia (Sz) and 33 with attention deficit hyperactivity disorder (ADHD) were recruited for the study. The ABRs were recorded. The analysis was based on calculating the areas of the time spans in the waveforms that differed significantly between groups; both latency and amplitude thereby contributed. The spans of difference were quantified for each subject in relation to the total area of the curve, which made comparisons balanced. The results showed highly significant differences between the study groups. The results are important for future work on identifying markers for neuropsychiatric clinical use. Reaching that goal calls for more extensive studies than this preliminary one. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
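One possible reading of the area-based analysis described above, offered here as an assumption rather than as the authors' published algorithm, is to sum the rectified waveform over the significantly group-different time spans and normalize by the total area of the curve:

```python
import numpy as np

def relative_span_area(waveform, significant_mask, fs):
    """Area of the rectified waveform inside significant time spans, expressed
    as a fraction of its total area so that subjects with different overall
    response sizes remain comparable.

    waveform: 1-D c-ABR average; significant_mask: boolean array of the same
    length marking samples where the groups differed significantly. This is
    one reading of the abstract's description, not the authors' algorithm.
    """
    w = np.abs(np.asarray(waveform, dtype=float))
    dt = 1.0 / fs
    span_area = w[significant_mask].sum() * dt
    total_area = w.sum() * dt
    return span_area / total_area

fs = 20_000
t = np.arange(0, 0.02, 1 / fs)                    # 20 ms response window
wave = np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.01)
mask = (t > 0.005) & (t < 0.008)                  # hypothetical group-different span
print(f"relative area in significant span: {relative_span_area(wave, mask, fs):.3f}")
```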
[Clinical features of a Chinese pedigree with Waardenburg syndrome type 2].
Yang, Shu-zhi; Yuan, Hui-jun; Bai, Lin-na; Cao, Ju-yang; Xu, Ye; Shen, Wei-dong; Ji, Fei; Yang, Wei-yan
2005-10-12
To investigate detailed clinical features of a Chinese pedigree with Waardenburg syndrome type 2. Members of this pedigree were interviewed to identify personal or family medical histories of hearing loss, the use of aminoglycosides, and other clinical abnormalities by completing a questionnaire. Audiological and other clinical evaluations of the proband and other members of this family were conducted, including pure-tone audiometry, immittance and auditory brain-stem response testing, as well as ophthalmological, dermatological, hair, and temporal bone CT examinations. This family is categorized as Waardenburg syndrome type 2 according to its clinical features. It is an autosomal dominant disorder with incomplete penetrance. The clinical features varied greatly among family members and were characterized by sensorineural hearing loss, heterochromia iridis, freckles on the face and premature gray hair. Hearing loss can be unilateral or bilateral, congenital or late onset in this family. This Chinese family has some unique clinical features compared with the international diagnostic criteria for Waardenburg syndrome. This study may provide some evidence for amending the diagnostic criteria for Waardenburg syndrome in the Chinese population.
Electronystagmography and audio potentials in space flight
NASA Technical Reports Server (NTRS)
Thornton, William E.; Biggers, W. P.; Pool, Sam L.; Thomas, W. G.; Thagard, Norman E.
1985-01-01
Beginning with the fourth flight of the Space Transport System (STS-4), objective measurements of inner ear function were conducted in near-zero G conditions in earth orbit. The problem of space motion sickness (SMS) was approached much like any disequilibrium problem encountered clinically. However, objective testing techniques had built-in limitations superimposed by the strict parameters inherent in each mission. An attempt was made to objectively characterize SMS, and to first ascertain whether the objective measurements indicated that this disorder was of peripheral or central origin. Electronystagmography and auditory brain stem response recordings were the primary investigative tools. One of the authors (W.E.T.) was a mission specialist on board the orbiter Challenger on the eighth shuttle mission (STS-8) and had the opportunity to make direct and personal observations regarding SMS, an opportunity which has added immeasurably to our understanding of this disorder. Except for two abnormal ENG records, which remain to be explained, the remaining ENG records and all the ABR records made in the weightless environment of space were normal.
Human inferior colliculus activity relates to individual differences in spoken language learning.
Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M
2012-03-01
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.
Henry, Molly J.; Herrmann, Björn; Kunke, Dunja; Obleser, Jonas
2017-01-01
Healthy aging is accompanied by listening difficulties, including decreased speech comprehension, that stem from an ill-understood combination of sensory and cognitive changes. Here, we use electroencephalography to demonstrate that auditory neural oscillations of older adults entrain less firmly and less flexibly to speech-paced (∼3 Hz) rhythms than younger adults’ during attentive listening. These neural entrainment effects are distinct in magnitude and origin from the neural response to sound per se. Non-entrained parieto-occipital alpha (8–12 Hz) oscillations are enhanced in young adults, but suppressed in older participants, during attentive listening. Entrained neural phase and task-induced alpha amplitude exert opposite, complementary effects on listening performance: higher alpha amplitude is associated with reduced entrainment-driven behavioural performance modulation. Thus, alpha amplitude as a task-driven, neuro-modulatory signal can counteract the behavioural corollaries of neural entrainment. Balancing these two neural strategies may present new paths for intervention in age-related listening difficulties. PMID:28654081
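The two quantities contrasted above, entrainment of neural phase to a speech-paced (~3 Hz) rhythm and alpha-band (8–12 Hz) amplitude, can be approximated with simple FFT-based estimates. The sketch below computes inter-trial phase coherence at a target frequency and mean band power on synthetic trials; the single-frequency FFT approach is a stand-in assumption for the wavelet or filter-based methods typically used.

```python
import numpy as np

def itpc_at(trials, fs, freq):
    """Inter-trial phase coherence at a single frequency, computed from the
    FFT phase of each trial (a simple stand-in for wavelet-based estimates)."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))

def band_power(trials, fs, lo, hi):
    """Mean spectral power in a frequency band (e.g. alpha, 8-12 Hz) across trials."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    spec = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return float(spec[:, band].mean())

rng = np.random.default_rng(3)
fs, dur, n_trials = 250, 4.0, 60
t = np.arange(0, dur, 1 / fs)
# Synthetic listener: stimulus-locked 3 Hz component plus non-locked 10 Hz alpha and noise
trials = (0.8 * np.sin(2 * np.pi * 3 * t)
          + 1.0 * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi, (n_trials, 1)))
          + rng.normal(0, 1, (n_trials, t.size)))

print("ITPC at 3 Hz:", round(itpc_at(trials, fs, 3.0), 2))
print("alpha power (8-12 Hz):", round(band_power(trials, fs, 8, 12), 1))
```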
Functional evaluation of a cell replacement therapy in the inner ear
Hu, Zhengqing; Ulfendahl, Mats; Prieskorn, Diane M.; Olivius, N. Petri; Miller, Josef M.
2015-01-01
Hypothesis: Cell replacement therapy in the inner ear will contribute to functional recovery from hearing loss. Background: Cell replacement therapy is a potentially powerful approach to replace degenerated or severely damaged spiral ganglion neurons. This study aimed to stimulate neurite outgrowth of the implanted neurons and enhance the therapeutic potential of inner ear cell implants. Methods: Chronic electrical stimulation (CES) and exogenous neurotrophic growth factor (NGF) were applied to 46 guinea pigs transplanted with embryonic dorsal root ganglion (DRG) neurons four days post deafening. The animals were evaluated with electrically evoked auditory brain stem responses (EABRs) on experimental days 7, 11, 17, 24, and 31. The animals were euthanized at day 31 and the inner ears were dissected out for immunohistochemical investigation. Results: Implanted DRG cells, identified by EGFP fluorescence and a neuronal marker, were found close to Rosenthal's canal in the adult inner ear for up to four weeks following transplantation. Extensive neurite projections, clearly greater than in non-treated animals, were observed to penetrate the bony modiolus and reach the spiral ganglion region in animals supplied with CES and/or NGF. There was, however, no significant difference in the thresholds of EABRs between DRG-transplanted animals supplied with CES and/or NGF and DRG-transplanted animals without CES or NGF supplementation. Conclusions: The results suggest that CES and/or NGF can stimulate neurite outgrowth from implanted neurons, although, based on EABR measurements, these interventions did not induce functional connections to the central auditory pathway. Additional time or novel approaches may enhance the functional responsiveness of implanted cells in the adult cochlea. PMID:19395986
Escera, Carles; Leung, Sumie; Grimm, Sabine
2014-07-01
Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. By its long latency and cerebral generators, the cortical nature of both the processes of regularity encoding and deviance detection has been assumed. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding-rather than on refractoriness-occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location) fail to elicit MLR correlates but elicit sizable MMNs. Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection is organized in ascending levels of complexity along the auditory pathway expanding from the brainstem up to higher-order areas of the cerebral cortex.
Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H
2016-07-06
During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we studied the behavioral consequences of adding different types of auditory distractors in a visual selective attention task in wild-type and α-9 nicotinic receptor knock-out (KO) mice. We demonstrate that KO mice perform poorly in the selective attention paradigm and that an intact medial olivocochlear transmission aids in ignoring auditory distractors during attention. Copyright © 2016 the authors 0270-6474/16/367198-12$15.00/0.
Prenatal Nicotine Exposure Disrupts Infant Neural Markers of Orienting.
King, Erin; Campbell, Alana; Belger, Aysenil; Grewen, Karen
2018-06-07
Prenatal nicotine exposure (PNE) from maternal cigarette smoking is linked to developmental deficits, including impaired auditory processing, language, generalized intelligence, attention, and sleep. Fetal brain undergoes massive growth, organization, and connectivity during gestation, making it particularly vulnerable to neurotoxic insult. Nicotine binds to nicotinic acetylcholine receptors, which are extensively involved in growth, connectivity, and function of developing neural circuitry and neurotransmitter systems. Thus, PNE may have long-term impact on neurobehavioral development. The purpose of this study was to compare the auditory K-complex, an event-related potential reflective of auditory gating, sleep preservation and memory consolidation during sleep, in infants with and without PNE and to relate these neural correlates to neurobehavioral development. We compared brain responses to an auditory paired-click paradigm in 3- to 5-month-old infants during Stage 2 sleep, when the K-complex is best observed. We measured component amplitude and delta activity during the K-complex. Infants with PNE demonstrated significantly smaller amplitude of the N550 component and reduced delta-band power within elicited K-complexes compared to nonexposed infants and also were less likely to orient with a head turn to a novel auditory stimulus (bell ring) when awake. PNE may impair auditory sensory gating, which may contribute to disrupted sleep and to reduced auditory discrimination and learning, attention re-orienting, and/or arousal during wakefulness reported in other studies. Links between PNE and reduced K-complex amplitude and delta power may represent altered cholinergic and GABAergic synaptic programming and possibly reflect early neural bases for PNE-linked disruptions in sleep quality and auditory processing. These may pose significant disadvantage for language acquisition, attention, and social interaction necessary for academic and social success.
Biomechanics of Concussion: The Importance of Neck Tension
NASA Astrophysics Data System (ADS)
Jadischke, Ronald
Linear and angular velocity and acceleration of the head are typically correlated with concussion. Despite improvements in helmet performance to reduce accelerations, a corresponding reduction in the incidence of concussion has not occurred (National Football League [NFL] 1996-present). There is compelling research that forces on, and deformation of, the brain stem are related to concussion. The brain stem is the center of control for respiration, blood pressure and heart rate and is the root of most cranial nerves. Injury to the brain stem is consistent with most symptoms of concussion reported in the National Football League and the National Hockey League, such as headaches, neck pain, dizziness, and blurred vision. In the Hybrid III anthropomorphic test device (ATD), the upper neck load cell is in close proximity to the location of the human brain stem. This study found that the additional mass of a football helmet on the Hybrid III headform increases the upper neck forces and moments in response to helmet-to-helmet and helmet-to-chest impacts. A new laboratory impactor device was constructed to simulate collisions using two moving Hybrid III ATDs. The impactor was used to recreate on-field collisions (n = 20) in American football while measuring head, neck and upper torso kinematics. A strong correlation was found between concussion and upper neck forces, upper neck power, and the estimated strains and strain rates along the axis of the upper cervical spinal cord and brain stem. These biomechanical responses should be added to head kinematic responses for a more comprehensive evaluation of concussion.
Animal model of cochlear third window in the scala vestibuli or scala tympani.
Attias, Joseph; Preis, Michal; Shemesh, Rafi; Hadar, Tuvia; Nageris, Ben I
2010-08-01
The auditory impact of a cochlear third window differs by its location in the scala vestibuli or scala tympani. Pathologic third window has been investigated primarily in the vestibular apparatus of animals and humans. Dehiscence of the superior semicircular canal is the clinical model. Fat sand rats (n = 11) have a unique inner-ear anatomy that allows easy surgical access. A window was drilled in the bony labyrinth over the scala vestibuli in 1 group (12 ears) and over the scala tympani in another (7 ears) while preserving the membranous labyrinth. Auditory brain stem responses to high- and low-frequency stimuli delivered by air and bone conduction were recorded before and after the procedure. Scala vestibuli group: preoperative air-conduction thresholds to clicks and tone-bursts averaged 8.3 and 9.6 dB, respectively, and bone-conduction thresholds, 4.6 and 3.3 dB, respectively; after fenestration, air-conduction thresholds averaged 40.4 and 41.8 dB, respectively, and bone-conduction thresholds, -1 and 5.6 dB, respectively. Scala tympani group: preoperative air-conduction thresholds to clicks and tone-bursts averaged 8.6 dB each, and bone-conduction thresholds, 7.9 dB and 7.1 dB, respectively; after fenestration, air-conduction thresholds averaged 11.4 and 9.3 dB, respectively, and bone-conduction thresholds, 9.3 and 4.2 dB, respectively. The changes in air- (p = 0.0001) and bone-conduction (p = 0.04) thresholds were statistically significant only in the scala vestibuli group. The presence of a cochlear third window over the scala vestibuli, but not over the scala tympani, causes a significant increase in air-conduction auditory thresholds. These results agree with the theoretic model and clinical findings and contribute to our understanding of vestibular dehiscence.
Söderlund, Göran B. W.; Jobs, Elisabeth Nilsson
2016-01-01
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brain stem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman’s speech recognition test, in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when the children were exposed to noise at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure. PMID:26858679
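Speech recognition thresholds of this kind are usually estimated adaptively. The sketch below shows a generic 1-up/1-down staircase that converges near 50% intelligibility; it is an illustration under stated assumptions, not the modified Hagerman procedure itself, and the `present_trial` callable, step size, and simulated listener are hypothetical.

```python
import numpy as np

def estimate_srt(present_trial, start_snr=0.0, step_db=2.0, n_trials=20):
    """Simple 1-up/1-down adaptive track converging near 50% correct.

    present_trial : callable taking an SNR (dB) and returning True if the
                    listener repeated the sentence correctly (hypothetical).
    Returns the mean SNR over the second half of the track as the SRT estimate.
    """
    snr = start_snr
    track = []
    for _ in range(n_trials):
        correct = present_trial(snr)
        track.append(snr)
        snr += -step_db if correct else step_db   # harder if correct, easier if not
    return float(np.mean(track[n_trials // 2:]))

# Hypothetical simulated listener with a true SRT near -4 dB SNR
rng = np.random.default_rng(0)
simulated = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr + 4)))
print(round(estimate_srt(simulated), 1), "dB SNR (estimated SRT)")
```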
Maturation of auditory neural processes in autism spectrum disorder - A longitudinal MEG study.
Port, Russell G; Edgar, J Christopher; Ku, Matthew; Bloy, Luke; Murray, Rebecca; Blaskey, Lisa; Levy, Susan E; Roberts, Timothy P L
2016-01-01
Individuals with autism spectrum disorder (ASD) show atypical brain activity, perhaps due to delayed maturation. Previous studies examining the maturation of auditory electrophysiological activity have been limited by their use of cross-sectional designs. The present study took a first step in examining magnetoencephalography (MEG) evidence of abnormal auditory response maturation in ASD via the use of a longitudinal design. Initially recruited for a previous study, 27 children with ASD and nine typically developing (TD) children, aged 6 to 11 years old, were re-recruited two to five years later. At both timepoints, MEG data were obtained while participants passively listened to sinusoidal pure tones. Bilateral primary/secondary auditory cortex time domain (100 ms evoked response latency (M100)) and spectrotemporal measures (gamma-band power and inter-trial coherence (ITC)) were examined. MEG measures were also qualitatively examined for five children who exhibited "optimal outcome": participants who were initially on the spectrum but no longer met diagnostic criteria at follow-up. M100 latencies were delayed in ASD versus TD at the initial exam (~ 19 ms) and at follow-up (~ 18 ms). At both exams, M100 latencies were associated with clinical ASD severity. In addition, gamma-band evoked power and ITC were reduced in ASD versus TD. M100 latency and gamma-band maturation rates did not differ between ASD and TD. Of note, the cohort of five children who demonstrated "optimal outcome" exhibited M100 latency and gamma-band activity mean values intermediate between TD and ASD at both timepoints. Though justifying only qualitative interpretation, these "optimal outcome" data are presented here to motivate future studies. Children with ASD showed perturbed auditory cortex neural activity, as evidenced by M100 latency delays as well as reduced transient gamma-band activity. Despite evidence for maturation of these responses in ASD, the neural abnormalities in ASD persisted across time. Of note, data from the five children who demonstrated "optimal outcome" qualitatively suggest that such clinical improvements may be associated with auditory brain responses intermediate between TD and ASD. These "optimal outcome" results are not statistically significant, however, which is to be expected given the small size of this cohort and the relatively low proportion of "optimal outcome" cases in the ASD population. Thus, further investigations with larger cohorts are needed to determine whether the above auditory response phenotypes have prognostic utility, predictive of clinical outcome.
Christie, Kimberly J.; Turnley, Ann M.
2012-01-01
Neural stem/precursor cells in the adult brain reside in the subventricular zone (SVZ) of the lateral ventricles and the subgranular zone (SGZ) of the dentate gyrus in the hippocampus. These cells primarily generate neuroblasts that normally migrate to the olfactory bulb (OB) and the dentate granule cell layer respectively. Following brain damage, such as traumatic brain injury, ischemic stroke or in degenerative disease models, neural precursor cells from the SVZ in particular, can migrate from their normal route along the rostral migratory stream (RMS) to the site of neural damage. This neural precursor cell response to neural damage is mediated by release of endogenous factors, including cytokines and chemokines produced by the inflammatory response at the injury site, and by the production of growth and neurotrophic factors. Endogenous hippocampal neurogenesis is frequently also directly or indirectly affected by neural damage. Administration of a variety of factors that regulate different aspects of neural stem/precursor biology often leads to improved functional motor and/or behavioral outcomes. Such factors can target neural stem/precursor proliferation, survival, migration and differentiation into appropriate neuronal or glial lineages. Newborn cells also need to subsequently survive and functionally integrate into extant neural circuitry, which may be the major bottleneck to the current therapeutic potential of neural stem/precursor cells. This review will cover the effects of a range of intrinsic and extrinsic factors that regulate neural stem/precursor cell functions. In particular it focuses on factors that may be harnessed to enhance the endogenous neural stem/precursor cell response to neural damage, highlighting those that have already shown evidence of preclinical effectiveness and discussing others that warrant further preclinical investigation. PMID:23346046
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W
2013-11-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
Extrathalamic Modulation of Cortical Responsiveness
1994-08-01
Entrainment to an auditory signal: Is attention involved?
Kunert, Richard; Jongman, Suzanne R
2017-01-01
Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Musical and verbal short-term memory: insights from neurodevelopmental and neurological disorders.
Caclin, Anne; Tillmann, Barbara
2018-05-09
Auditory short-term memory (STM) is a fundamental ability to make sense of auditory information as it unfolds over time. Whether separate STM systems exist for different types of auditory information (music and speech, in particular) is a matter of debate. The present paper reviews studies that have investigated both musical and verbal STM in healthy individuals and in participants with neurodevelopmental and neurological disorders. Overall, the results are in favor of only partly shared networks for musical and verbal STM. Evidence for a distinction in STM for the two materials stems from (1) behavioral studies in healthy participants, in particular from the comparison between nonmusicians and musicians; (2) behavioral studies in congenital amusia, where a selective pitch STM deficit is observed; and (3) studies in brain-damaged patients with cases of double dissociation. In this review we highlight the need for future studies comparing STM for the same perceptual dimension (e.g., pitch) in different materials (e.g., music and speech), as well as for studies aiming at a more insightful characterization of shared and distinct mechanisms for speech and music in the different components of STM, namely encoding, retention, and retrieval. © 2018 New York Academy of Sciences.
Kabella, Danielle M; Flynn, Lucinda; Peters, Amanda; Kodituwakku, Piyadasa; Stephen, Julia M
2018-05-24
Prior studies indicate that the auditory mismatch response is sensitive to early alterations in brain development in multiple developmental disorders. Prenatal alcohol exposure is known to impact early auditory processing. The current study hypothesized alterations in the mismatch response in young children with fetal alcohol spectrum disorders (FASD). Participants in this study were 9 children with a FASD and 17 control children (Control) aged 3 to 6 years. Participants underwent magnetoencephalography and structural magnetic resonance imaging scans separately. We compared groups on neurophysiological mismatch negativity (MMN) responses to auditory stimuli measured using the auditory oddball paradigm. Frequent (1,000 Hz) and rare (1,200 Hz) tones were presented at 72 dB. There was no significant group difference in MMN response latency or amplitude represented by the peak located ~200 ms after stimulus presentation in the difference time course between frequent and infrequent tones. Examining the time courses to the frequent and infrequent tones separately, repeated measures analysis of variance with condition (frequent vs. rare), peak (N100m and N200m), and hemisphere as within-subject factors and diagnosis and sex as the between-subject factors showed a significant interaction of peak by diagnosis (p = 0.001), with a pattern of decreased amplitude from N100m to N200m in Control children and the opposite pattern in children with FASD. However, no significant difference was found with the simple effects comparisons. No group differences were found in the response latencies of the rare auditory evoked fields. The results indicate that there was no detectable effect of alcohol exposure on the amplitude or latency of the MMNm response to simple tones modulated by frequency change in preschool-aged children with FASD. However, while discrimination abilities to simple tones may be intact, early auditory sensory processing revealed by the interaction between N100m and N200m amplitude indicates that auditory sensory processing may be altered in children with FASD. Copyright © 2018 by the Research Society on Alcoholism.
Cerebral processing of auditory stimuli in patients with irritable bowel syndrome
Andresen, Viola; Poellinger, Alexander; Tsrouya, Chedwa; Bach, Dominik; Stroh, Albrecht; Foerschler, Annette; Georgiewa, Petra; Schmidtmann, Marco; van der Voort, Ivo R; Kobelt, Peter; Zimmer, Claus; Wiedenmann, Bertram; Klapp, Burghard F; Monnikes, Hubert
2006-01-01
AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while the response patterns, unlike in controls, did not differentiate between distressing or pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific for visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients. PMID:16586541
The harmonic organization of auditory cortex.
Wang, Xiaoqin
2013-12-17
A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.
Ultrasound Produces Extensive Brain Activation via a Cochlear Pathway.
Guo, Hongsun; Hamilton, Mark; Offutt, Sarah J; Gloeckner, Cory D; Li, Tianqi; Kim, Yohan; Legon, Wynn; Alford, Jamu K; Lim, Hubert H
2018-06-06
Ultrasound (US) can noninvasively activate intact brain circuits, making it a promising neuromodulation technique. However, little is known about the underlying mechanism. Here, we apply transcranial US and perform brain mapping studies in guinea pigs using extracellular electrophysiology. We find that US elicits extensive activation across cortical and subcortical brain regions. However, transection of the auditory nerves or removal of cochlear fluids eliminates the US-induced activity, revealing an indirect auditory mechanism for US neural activation. Our findings indicate that US activates the ascending auditory system through a cochlear pathway, which can activate other non-auditory regions through cross-modal projections. This cochlear pathway mechanism challenges the idea that US can directly activate neurons in the intact brain, suggesting that future US stimulation studies will need to control for this effect to reach reliable conclusions. Copyright © 2018 Elsevier Inc. All rights reserved.
Prospects for Replacement of Auditory Neurons by Stem Cells
Shi, Fuxin; Edge, Albert S.B.
2013-01-01
Sensorineural hearing loss is caused by degeneration of hair cells or auditory neurons. Spiral ganglion cells, the primary afferent neurons of the auditory system, are patterned during development and send out projections to hair cells and to the brainstem under the control of largely unknown guidance molecules. The neurons do not regenerate after loss and even damage to their projections tends to be permanent. The genesis of spiral ganglion neurons and their synapses forms a basis for regenerative approaches. In this review we critically present the current experimental findings on auditory neuron replacement. We discuss the latest advances with a focus on (a) exogenous stem cell transplantation into the cochlea for neural replacement, (b) expression of local guidance signals in the cochlea after loss of auditory neurons, (c) the possibility of neural replacement from an endogenous cell source, and (d) functional changes from cell engraftment. PMID:23370457
Auditory-vocal mirroring in songbirds.
Mooney, Richard
2014-01-01
Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.
Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J
2016-01-28
Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits the clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether a similar cross-modal plasticity of the auditory cortex exists for acquired monaural deafness as for early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the test, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in the left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more pronounced changes in entropy connectivity than the right USNHL group. No significant plastic changes were observed in the right USNHL. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, the cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation from the left or right side generates different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Source Space Estimation of Oscillatory Power and Brain Connectivity in Tinnitus
Zobay, Oliver; Palmer, Alan R.; Hall, Deborah A.; Sereda, Magdalena; Adjamian, Peyman
2015-01-01
Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of structural and functional changes in the brain. However, the precise pathophysiology of tinnitus remains unknown. Llinas’ thalamocortical dysrhythmia model suggests that neural deafferentation due to hearing loss causes a dysregulation of coherent activity between thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex where normally alpha oscillations should occur. Numerous studies also suggest that tinnitus perception relies on the interplay between auditory and non-auditory brain areas. According to the Global Brain Model, a network of global fronto-parietal-cingulate areas is important in the generation and maintenance of the conscious perception of tinnitus. Thus, the distress experienced by many individuals with tinnitus is related to the top-down influence of this global network on auditory areas. In this magnetoencephalographic study, we compare resting-state oscillatory activity of tinnitus participants and normal-hearing controls to examine effects on spectral power as well as functional and effective connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the alpha band. A significant increase is also found for the effective connectivity from a global brain network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on spectral power. Overall, our results provide only limited support for the thalamocortical dysrhythmia and Global Brain models of tinnitus. PMID:25799178
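One common way to index band-limited functional connectivity between two beamformer-projected source time courses is magnitude-squared coherence averaged over the band of interest. The sketch below illustrates that generic calculation for the alpha band; it is not the study's exact pipeline, and the sampling rate, band edges, and simulated time courses are assumptions.

```python
import numpy as np
from scipy.signal import coherence

def alpha_coherence(x, y, fs, band=(8.0, 12.0)):
    """Mean magnitude-squared coherence between two source time courses
    within the alpha band (a generic connectivity index)."""
    freqs, coh = coherence(x, y, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(coh[mask].mean())

# Hypothetical resting-state source time courses (e.g., left/right auditory ROI)
fs = 600
t = np.arange(0, 60, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)                 # common 10 Hz component
x = shared + 0.5 * np.random.randn(t.size)
y = shared + 0.5 * np.random.randn(t.size)
print(alpha_coherence(x, y, fs))
```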
A prediction of templates in the auditory cortex system
NASA Astrophysics Data System (ADS)
Ghanbeigi, Kimia
In this study, variation of human auditory evoked mismatch field amplitudes in response to complex tones was investigated as a function of the removal of single partials in the onset period. It was determined that: (1) elimination of a single frequency in a sound stimulus plays a significant role in human brain sound recognition; (2) by comparing the mismatches of the brain response due to a single frequency elimination in the "Starting Transient" and the "Sustained Part" of the sound stimulus, the brain is found to be more sensitive to frequency elimination in the Starting Transient. This study involved 4 healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG. Verification of spatial location in the auditory cortex was determined by comparison with MRI images. In the first set of stimuli, repetitive ('standard') tones with five selected onset frequencies were randomly embedded in a string of rare ('deviant') tones with randomly varying inter-stimulus intervals. In the deviant tones, one of the frequency components was omitted relative to the standard tones during the onset period. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion by the generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment, or the brain processing. In the second set of stimuli, time-structured as above, repetitive ('standard') tones with five selected sustained frequency components were embedded in a string of rare ('deviant') tones for which one of these selected frequencies was omitted in the sustained tone. The same careful frequency selection was applied in both measurements. Results: By comparing the MMN of the two data sets, the relative contribution to sound recognition of the omitted partial frequency components in the onset and sustained regions was determined. Conclusion: The presence of significant mismatch negativity, due to neural activity of the auditory cortex, emphasizes that the brain recognizes the elimination of a single frequency among carefully chosen anharmonic frequencies. This mismatch is more significant if the single frequency elimination occurs in the onset period.
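The "careful frequency selection" described above amounts to checking that the omitted test partial cannot be regenerated by nonlinear distortion of the remaining partials. The sketch below shows one plausible form of that check (integer harmonics and low-order sum/difference combination tones); the tolerance, maximum order, and example frequencies are assumptions for illustration only.

```python
import itertools

def is_safe_partial(test_f, other_partials, max_order=3, tol_hz=5.0):
    """Return True if test_f is not (within tol_hz) an integer harmonic of any
    remaining partial, nor a low-order combination tone (m*f1 +/- n*f2)."""
    # Harmonics of the individual remaining partials
    for f in other_partials:
        for k in range(1, max_order + 1):
            if abs(test_f - k * f) < tol_hz:
                return False
    # Combination tones of pairs of remaining partials
    for f1, f2 in itertools.combinations(other_partials, 2):
        for m in range(1, max_order + 1):
            for n in range(1, max_order + 1):
                for combo in (m * f1 + n * f2, abs(m * f1 - n * f2)):
                    if abs(test_f - combo) < tol_hz:
                        return False
    return True

# Hypothetical complex tone: could omitting the 1123 Hz partial be "filled in"
# by distortion products of the remaining partials?
print(is_safe_partial(1123.0, [440.0, 617.0, 893.0, 1531.0]))
```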
Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang
2015-01-01
Previous studies have shown brain reorganization after early deprivation of auditory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, and that connectivity between the language and visual systems is also reduced. Notably, significantly increased connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of auditory input in prelingually deaf adolescents, especially between the auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.
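Sparse inverse covariance estimation of the kind described can be approximated with the graphical lasso: non-zero off-diagonal entries of the estimated precision matrix are read as connections between regions. A minimal sketch follows, assuming a subjects-by-regions matrix of grey matter volumes; the placeholder data and the use of scikit-learn's GraphicalLassoCV are assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Hypothetical data: rows = subjects, columns = grey matter volume in the
# 14 regions of interest (standardized before estimation)
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 14))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Sparse inverse covariance (precision) estimation; non-zero off-diagonal
# entries are interpreted as grey matter "connections" between regions
model = GraphicalLassoCV().fit(X)
precision = model.precision_
connections = (np.abs(precision) > 1e-6) & ~np.eye(14, dtype=bool)
print("estimated connections:", int(connections.sum()) // 2)
```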
Shera, Christopher A.; Melcher, Jennifer R.
2014-01-01
Atypical medial olivocochlear (MOC) feedback from brain stem to cochlea has been proposed to play a role in tinnitus, but even well-constructed tests of this idea have yielded inconsistent results. In the present study, it was hypothesized that low sound tolerance (mild to moderate hyperacusis), which can accompany tinnitus or occur on its own, might contribute to the inconsistency. Sound-level tolerance (SLT) was assessed in subjects (all men) with clinically normal or near-normal thresholds to form threshold-, age-, and sex-matched groups: 1) no tinnitus/high SLT, 2) no tinnitus/low SLT, 3) tinnitus/high SLT, and 4) tinnitus/low SLT. MOC function was measured from the ear canal as the change in magnitude of distortion-product otoacoustic emissions (DPOAE) elicited by broadband noise presented to the contralateral ear. The noise reduced DPOAE magnitude in all groups (“contralateral suppression”), but significantly more reduction occurred in groups with tinnitus and/or low SLT, indicating hyperresponsiveness of the MOC system compared with the group with no tinnitus/high SLT. The results suggest hyperresponsiveness of the interneurons of the MOC system residing in the cochlear nucleus and/or MOC neurons themselves. The present data, combined with previous human and animal data, indicate that neural pathways involving every major division of the cochlear nucleus manifest hyperactivity and/or hyperresponsiveness in tinnitus and/or low SLT. The overactivation may develop in each pathway separately. However, a more parsimonious hypothesis is that top-down neuromodulation is the driving force behind ubiquitous overactivation of the auditory brain stem and may correspond to attentional spotlighting on the auditory domain in tinnitus and hyperacusis. PMID:25231612
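Contralateral suppression as measured above is typically quantified as the change in DPOAE level with versus without contralateral broadband noise. The sketch below shows that bookkeeping for a handful of hypothetical DPOAE frequencies; the values are illustrative only.

```python
import numpy as np

def contralateral_suppression(dpoae_quiet_db, dpoae_noise_db):
    """Contralateral suppression per DPOAE frequency: level without
    contralateral noise minus level with contralateral broadband noise.
    Positive values mean the noise reduced DPOAE magnitude, i.e. a stronger
    MOC effect."""
    return np.asarray(dpoae_quiet_db) - np.asarray(dpoae_noise_db)

# Hypothetical DPOAE levels (dB SPL) at four f2 frequencies, quiet vs. noise
quiet = [8.1, 6.4, 5.0, 3.2]
noise = [5.9, 4.8, 3.1, 2.0]
supp = contralateral_suppression(quiet, noise)
print(supp, "-> mean suppression:", round(float(supp.mean()), 2), "dB")
```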
How do auditory cortex neurons represent communication sounds?
Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc
2013-11-01
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Representation of particle motion in the auditory midbrain of a developing anuran.
Simmons, Andrea Megela
2015-07-01
In bullfrog tadpoles, a "deaf period" of lessened responsiveness to the pressure component of sounds, evident during the end of the late larval period, has been identified in the auditory midbrain. But coding of underwater particle motion in the vestibular medulla remains stable over all of larval development, with no evidence of a "deaf period." Neural coding of particle motion in the auditory midbrain was assessed to determine if a "deaf period" for this mode of stimulation exists in this brain area in spite of its absence from the vestibular medulla. Recording sites throughout the developing laminar and medial principal nuclei show relatively stable thresholds to z-axis particle motion, up until the "deaf period." Thresholds then begin to increase from this point up through the rest of metamorphic climax, and significantly fewer responsive sites can be located. The representation of particle motion in the auditory midbrain is less robust during later compared to earlier larval stages, overlapping with but also extending beyond the restricted "deaf period" for pressure stimulation. The decreased functional representation of particle motion in the auditory midbrain throughout metamorphic climax may reflect ongoing neural reorganization required to mediate the transition from underwater to amphibious life.
Bierer, Julie Arenberg; Faulkner, Kathleen F; Tremblay, Kelly L
2011-01-01
The goal of this study was to compare cochlear implant behavioral measures and electrically evoked auditory brain stem responses (EABRs) obtained with a spatially focused electrode configuration. It has been shown previously that channels with high thresholds, when measured with the tripolar configuration, exhibit relatively broad psychophysical tuning curves. The elevated threshold and degraded spatial/spectral selectivity of such channels are consistent with a poor electrode-neuron interface, defined as suboptimal electrode placement or reduced nerve survival. However, the psychophysical methods required to obtain these data are time intensive and may not be practical during a clinical mapping session, especially for young children. Here, we have extended the previous investigation to determine whether a physiological approach could provide a similar assessment of channel functionality. We hypothesized that, in accordance with the perceptual measures, higher EABR thresholds would correlate with steeper EABR amplitude growth functions, reflecting a degraded electrode-neuron interface. Data were collected from six cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. EABRs were obtained in each subject for the two channels having the highest and lowest tripolar (σ = 1 or 0.9) behavioral threshold. Evoked potentials were measured with both the monopolar (σ = 0) and a more focused partial tripolar (σ ≥ 0.50) configuration. Consistent with previous studies, EABR thresholds were highly and positively correlated with behavioral thresholds obtained with both the monopolar and partial tripolar configurations. The Wave V amplitude growth functions with increasing stimulus level showed the predicted effect of shallower growth for the partial tripolar than for the monopolar configuration, but this was observed only for the low-threshold channels. In contrast, high-threshold channels showed the opposite effect; steeper growth functions were seen for the partial tripolar configuration. These results suggest that behavioral thresholds or EABRs measured with a restricted stimulus can be used to identify potentially impaired cochlear implant channels. Channels having high thresholds and steep growth functions would likely not activate the appropriate spatially restricted region of the cochlea, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
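The partial tripolar configuration described above splits the return current: a fraction σ flows back through the two flanking electrodes (σ/2 each) and the remainder (1 − σ) through a distant indifferent electrode. A minimal sketch of that bookkeeping, with a hypothetical 1 mA pulse, is shown below.

```python
def partial_tripolar_currents(i_center, sigma):
    """Current carried by each contact for a partial tripolar stimulus:
    the active electrode sources i_center, the two flanking electrodes each
    sink sigma/2 of it, and the remote ground sinks the rest.
    sigma = 0 is monopolar, sigma = 1 is full tripolar."""
    assert 0.0 <= sigma <= 1.0
    return {
        "active": i_center,
        "flank_each": -sigma * i_center / 2.0,
        "remote_ground": -(1.0 - sigma) * i_center,
    }

# Hypothetical 1 mA pulse with sigma = 0.9 (the focused configuration used
# for the behavioral thresholds described above)
print(partial_tripolar_currents(1.0, 0.9))
```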
Metabotropic glutamate receptors in auditory processing
Lu, Yong
2014-01-01
As the major excitatory neurotransmitter used in the vertebrate brain, glutamate activates ionotropic and metabotropic glutamate receptors (mGluRs), which mediate fast and slow neuronal actions, respectively. Important modulatory roles of mGluRs have been shown in many brain areas, and drugs targeting mGluRs have been developed for treatment of brain disorders. Here, I review the studies on mGluRs in the auditory system. Anatomical expression of mGluRs in the cochlear nucleus has been well characterized, while data for other auditory nuclei await more systematic investigations at both the light and electron microscopy levels. The physiology of mGluRs has been extensively studied using in vitro brain slice preparations, with a focus on the lower auditory brainstem in both mammals and birds. These in vitro physiological studies have revealed that mGluRs participate in neurotransmission, regulate ionic homeostasis, induce synaptic plasticity, and maintain the balance between excitation and inhibition in a variety of auditory structures. However, very few in vivo physiological studies on mGluRs in auditory processing have been undertaken at the systems level. Many questions regarding the essential roles of mGluRs in auditory processing still remain unanswered and more rigorous basic research is warranted. PMID:24909898
Connectivity in the human brain dissociates entropy and complexity of auditory inputs.
Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri
2015-03-01
Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
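As a concrete illustration of the entropy side of this distinction, the sketch below computes the Shannon entropy of a discretized auditory signal; complexity measures of the underlying generator require additional modelling and are not sketched here. The bin count and example signals are assumptions for illustration.

```python
import numpy as np

def shannon_entropy(signal, n_bins=32):
    """Shannon entropy (bits) of the distribution of a discretized signal.
    Indexes output randomness only, not generator complexity."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
tonal = np.sin(2 * np.pi * np.arange(8000) / 40)   # low-entropy input
noise = rng.standard_normal(8000)                  # high-entropy input
print(shannon_entropy(tonal), shannon_entropy(noise))
```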
Stabile, Frank A.; Carson, Richard E.
2017-01-01
Although there is growing evidence that estradiol modulates female perception of male sexual signals, relatively little research has focused on female auditory processing. We used in vivo 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) imaging to examine the neuronal effects of estradiol and conspecific song in female house sparrows (Passer domesticus). We assessed brain glucose metabolism, a measure of neuronal activity, in females with empty implants, estradiol implants, and empty implants ~1 month after estradiol implant removal. Females were exposed to conspecific or heterospecific songs immediately prior to imaging. The activity of brain regions involved in auditory perception did not differ between females with empty implants exposed to conspecific vs. heterospecific song, but neuronal activity was significantly reduced in females with estradiol implants exposed to heterospecific song. Furthermore, our within-individual design revealed that changes in brain activity due to high estradiol were actually greater several weeks after peak hormone exposure. Overall, this study demonstrates that PET imaging is a powerful tool for assessing large-scale changes in brain activity in living songbirds, and suggests that after breeding is done, specific environmental and physiological cues are necessary for estradiol-stimulated females to lose the selectivity they display in neural response to conspecific song. PMID:28832614
Selective attention to temporal features on nested time scales.
Henry, Molly J; Herrmann, Björn; Obleser, Jonas
2015-02-01
Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. We used a novel paradigm where listeners judged either the duration or modulation rate of auditory stimuli, and in which the stimulation, working memory demands, response requirements, and task difficulty were held constant. A first analysis identified all brain regions where individual brain activation patterns were correlated with individual behavioral performance patterns, which thus supported temporal judgments generically. A second analysis then isolated those brain regions that specifically regulated selective attention to temporal features: Neural responses in a bilateral fronto-parietal network including insular cortex and basal ganglia decreased with degree of change of the attended temporal feature. Critically, response patterns in these regions were inverted when the task required selectively ignoring this feature. The results demonstrate how the neural analysis of complex acoustic stimuli with multiple temporal features depends on a fronto-parietal network that simultaneously regulates the selective gain for attended and ignored temporal features. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Muenssinger, Jana; Stingl, Krunoslav T.; Matuz, Tamara; Binder, Gerhard; Ehehalt, Stefan; Preissl, Hubert
2013-01-01
Habituation—the response decrement to repetitively presented stimulation—is a basic cognitive capability and well suited to investigating the development and integrity of the human brain. To evaluate the developmental course of auditory habituation, the current study used magnetoencephalography (MEG) to investigate auditory habituation, dishabituation and stimulus specificity in children and adults and compared the results between age groups. Twenty-nine children (Mage = 9.69 years, SD ± 0.47) and 14 adults (Mage = 29.29 years, SD ± 3.47) participated in the study and passively listened to a habituation paradigm consisting of 100 trains of tones, each composed of five 500 Hz tones, one 750 Hz tone (the dishabituator) and another two 500 Hz tones, while focusing their attention on a silent movie. Adults showed the expected habituation and stimulus specificity within trains, while no response decrement was found between trains. Sensory adaptation or fatigue as a source of the response decrement in adults is unlikely given the strong reaction to the dishabituator (stimulus specificity) and strong mismatch negativity (MMN) responses. In children, however, neither habituation nor dishabituation or stimulus specificity could be found within trains; a response decrement was instead found across trains. It can be speculated that the differences between children and adults are linked to differences in stimulus processing due to attentional processes. This study shows developmental differences in task-related brain activation and discusses the possible influence of broader concepts such as attention, which should be taken into account when comparing performance on an identical task between age groups. PMID:23882207
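The train structure described above (five 500 Hz tones, one 750 Hz dishabituator, two further 500 Hz tones) is simple to synthesize. The sketch below is one way to do so; the tone duration, inter-stimulus interval, ramp length, and sampling rate are assumptions, not the study's stimulus parameters.

```python
import numpy as np

def tone(freq, dur=0.5, fs=44100, ramp=0.01):
    """Pure tone with brief cosine on/off ramps (durations are assumptions)."""
    t = np.arange(int(dur * fs)) / fs
    y = np.sin(2 * np.pi * freq * t)
    n_ramp = int(ramp * fs)
    env = np.ones_like(y)
    env[:n_ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[-n_ramp:] = env[:n_ramp][::-1]
    return y * env

def habituation_train(fs=44100, isi=0.5):
    """One train: five 500 Hz tones, a 750 Hz dishabituator, two 500 Hz tones."""
    freqs = [500] * 5 + [750] + [500] * 2
    gap = np.zeros(int(isi * fs))
    parts = []
    for f in freqs:
        parts.extend([tone(f, fs=fs), gap])
    return np.concatenate(parts[:-1])   # drop the trailing gap

train = habituation_train()
print(train.shape)
```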
Human-like brain hemispheric dominance in birdsong learning
Moorman, Sanne; Gobes, Sharon M. H.; Kuijpers, Maaike; Kerkhofs, Amber; Zandbergen, Matthijs A.; Bolhuis, Johan J.
2012-01-01
Unlike nonhuman primates, songbirds learn to vocalize very much like human infants acquire spoken language. In humans, Broca’s area in the frontal lobe and Wernicke’s area in the temporal lobe are crucially involved in speech production and perception, respectively. Songbirds have analogous brain regions that show a similar neural dissociation between vocal production and auditory perception and memory. In both humans and songbirds, there is evidence for lateralization of neural responsiveness in these brain regions. Human infants already show left-sided dominance in their brain activation when exposed to speech. Moreover, a memory-specific left-sided dominance in Wernicke’s area for speech perception has been demonstrated in 2.5-mo-old babies. It is possible that auditory-vocal learning is associated with hemispheric dominance and that this association arose in songbirds and humans through convergent evolution. Therefore, we investigated whether there is similar song memory-related lateralization in the songbird brain. We exposed male zebra finches to tutor or unfamiliar song. We found left-sided dominance of neuronal activation in a Broca-like brain region (HVC, a letter-based name) of juvenile and adult zebra finch males, independent of the song stimulus presented. In addition, juvenile males showed left-sided dominance for tutor song but not for unfamiliar song in a Wernicke-like brain region (the caudomedial nidopallium). Thus, left-sided dominance in the caudomedial nidopallium was specific for the song-learning phase and was memory-related. These findings demonstrate a remarkable neural parallel between birdsong and human spoken language, and they have important consequences for our understanding of the evolution of auditory-vocal learning and its neural mechanisms. PMID:22802637
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Canopoli, Alessandro; Herbst, Joshua A; Hahnloser, Richard H R
2014-05-14
Many animals exhibit flexible behaviors that they can adjust to increase reward or avoid harm (learning by positive or aversive reinforcement). But what neural mechanisms allow them to restore their original behavior (motor program) after reinforcement is withdrawn? One possibility is that motor restoration relies on brain areas that have a role in memorization but no role in either motor production or in sensory processing relevant for expressing the behavior and its refinement. We investigated the role of a higher auditory brain area in the songbird in modifying and restoring the stereotyped adult song. We exposed zebra finches to aversively reinforcing white noise stimuli contingent on the pitch of one of their stereotyped song syllables. In response, birds significantly changed the pitch of that syllable to avoid the aversive reinforcer. After we withdrew reinforcement, birds recovered their original song within a few days. However, we found that large bilateral lesions in the caudal medial nidopallium (NCM, a high auditory area) impaired recovery of the original pitch even several weeks after withdrawal of the reinforcing stimuli. Because NCM lesions spared both successful noise-avoidance behavior and birds' auditory discrimination ability, our results show that NCM is not needed for directed motor changes or for auditory discriminative processing, but is implicated in memorizing or recalling the memory of the recent song target. Copyright © 2014 the authors 0270-6474/14/347018-09$15.00/0.
Hearing faces: how the infant brain matches the face it sees with the speech it hears.
Bristow, Davina; Dehaene-Lambertz, Ghislaine; Mattout, Jeremie; Soares, Catherine; Gliga, Teodora; Baillet, Sylvain; Mangin, Jean-François
2009-05-01
Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and of the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory-visual integration occurs during the early stages of perception as in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domain. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and toward the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore the complexity and structure of the human cortical organization that sustains communication from the first weeks of life onward.
Dissociable meta-analytic brain networks contribute to coordinated emotional processing.
Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R
2018-06-01
Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.
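Hierarchical clustering of modeled activation maps, as described above, can be sketched generically as follows: each experiment's map is flattened to a voxel vector, pairwise distances are computed, a dendrogram is built, and the tree is cut into five groupings. The distance metric, linkage method, and downsized placeholder data below are assumptions rather than the study's exact choices (the actual analysis used 1,747 maps).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical input: one modeled activation map per experiment, flattened to
# a voxel vector (placeholder: 300 experiments x 2000 voxels of random data)
rng = np.random.default_rng(0)
ma_maps = rng.random((300, 2000))

# Correlation distance + average linkage are assumed choices for illustration
dist = pdist(ma_maps, metric="correlation")
tree = linkage(dist, method="average")

# Cut the dendrogram into five groupings of experiments
labels = fcluster(tree, t=5, criterion="maxclust")
print(np.bincount(labels)[1:])   # number of experiments per grouping
```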
Constructing Noise-Invariant Representations of Sound in the Auditory Pathway
Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.
2013-01-01
Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
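The abstract does not spell out the decoding procedure, so the following is only a minimal, generic illustration of the idea of assessing the noise tolerance of a population code: a template decoder is built from clean population responses and tested on responses with increasing amounts of additive noise. All array sizes, noise levels, and the correlation-based decision rule are arbitrary placeholders, not the authors' method.

```python
# Minimal sketch of a template-based population decoder (not the published analysis).
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_neurons, n_trials = 10, 40, 20
templates = rng.random((n_stim, n_neurons))                 # mean "clean" response per stimulus

def decode_accuracy(noise_sd):
    correct = 0
    for s in range(n_stim):
        for _ in range(n_trials):
            trial = templates[s] + rng.normal(0, noise_sd, n_neurons)  # noisy population response
            # assign the trial to the template it correlates with best
            r = [np.corrcoef(trial, t)[0, 1] for t in templates]
            correct += int(np.argmax(r) == s)
    return correct / (n_stim * n_trials)

for sd in (0.05, 0.2, 0.5):   # decoding accuracy degrades as the added noise grows
    print(sd, decode_accuracy(sd))
```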
Cytokine Immunopathogenesis of Enterovirus 71 Brain Stem Encephalitis
Wang, Shih-Min; Lei, Huan-Yao; Liu, Ching-Chuan
2012-01-01
Enterovirus 71 (EV71) is one of the most important causes of herpangina and hand, foot, and mouth disease. It can also cause severe complications of the central nervous system (CNS). Brain stem encephalitis with pulmonary edema is a severe complication that can lead to death. EV71 replicates in leukocytes, endothelial cells, and dendritic cells resulting in the production of immune and inflammatory mediators that shape innate and acquired immune responses and the complications of disease. Cytokines, as a part of innate immunity, favor the development of antiviral and Th1 immune responses. Cytokines and chemokines play an important role in the pathogenesis of EV71 brain stem encephalitis. Both the CNS and the systemic inflammatory responses to infection play important, but distinctly different, roles in the pathogenesis of EV71 pulmonary edema. Administration of intravenous immunoglobulin and milrinone, a phosphodiesterase inhibitor, has been shown to modulate inflammation, to reduce sympathetic overactivity, and to improve survival in patients with EV71 autonomic nervous system dysregulation and pulmonary edema. PMID:22956971
Reversing pathological neural activity using targeted plasticity.
Engineer, Navzer D; Riley, Jonathan R; Seale, Jonathan D; Vrana, Will A; Shetake, Jai A; Sudanagunta, Sindhu P; Borland, Michael S; Kilgard, Michael P
2011-02-03
Brain changes in response to nerve damage or cochlear trauma can generate pathological neural activity that is believed to be responsible for many types of chronic pain and tinnitus. Several studies have reported that the severity of chronic pain and tinnitus is correlated with the degree of map reorganization in somatosensory and auditory cortex, respectively. Direct electrical or transcranial magnetic stimulation of sensory cortex can temporarily disrupt these phantom sensations. However, there is as yet no direct evidence for a causal role of plasticity in the generation of pain or tinnitus. Here we report evidence that reversing the brain changes responsible can eliminate the perceptual impairment in an animal model of noise-induced tinnitus. Exposure to intense noise degrades the frequency tuning of auditory cortex neurons and increases cortical synchronization. Repeatedly pairing tones with brief pulses of vagus nerve stimulation completely eliminated the physiological and behavioural correlates of tinnitus in noise-exposed rats. These improvements persisted for weeks after the end of therapy. This method for restoring neural activity to normal may be applicable to a variety of neurological disorders.
Reversing pathological neural activity using targeted plasticity
Engineer, Navzer D.; Riley, Jonathan R.; Seale, Jonathan D.; Vrana, Will A.; Shetake, Jai A.; Sudanagunta, Sindhu P.; Borland, Michael S.; Kilgard, Michael P.
2012-01-01
Brain changes in response to nerve damage or cochlear trauma can generate pathological neural activity that is believed to be responsible for many types of chronic pain and tinnitus. Several studies have reported that the severity of chronic pain and tinnitus is correlated with the degree of map reorganization in somatosensory and auditory cortex, respectively. Direct electrical or transcranial magnetic stimulation of sensory cortex can temporarily disrupt these phantom sensations. However, there is as yet no direct evidence for a causal role of plasticity in the generation of pain or tinnitus. Here we report evidence that reversing the brain changes responsible can eliminate the perceptual impairment in an animal model of noise-induced tinnitus. Exposure to intense noise degrades the frequency tuning of auditory cortex neurons and increases cortical synchronization. Repeatedly pairing tones with brief pulses of vagus nerve stimulation completely eliminated the physiological and behavioural correlates of tinnitus in noise-exposed rats. These improvements persisted for weeks after the end of therapy. This method for restoring neural activity to normal may be applicable to a variety of neurological disorders. PMID:21228773
Smoking modulates language lateralization in a sex-specific way.
Hahn, Constanze; Pogun, Sakire; Güntürkün, Onur
2010-12-01
Smoking affects a widespread network of neuronal functions by altering the properties of acetylcholinergic transmission. Recent studies show that nicotine consumption affects ascending auditory pathways and alters auditory attention, particularly in men. Here we show that smoking affects language lateralization in a sex-specific way. We assessed brain asymmetries of 90 healthy, right-handed participants using a classic consonant-vowel syllable dichotic listening paradigm in a 2×3 experimental design with sex (male, female) and smoking status (non-smoker, light smoker, heavy smoker) as between-subject factors. Our results revealed that male smokers had a significantly less lateralized response pattern compared to the other groups due to a decreased response rate of their right ear. This finding suggests a group-specific impairment of the speech dominant left hemisphere. In addition, decreased overall response accuracy was observed in male smokers compared to the other experimental groups. Similar adverse effects of smoking were not detected in women. Further, a significant negative correlation was detected between the severity of nicotine dependency and response accuracy in male but not in female smokers. Taken together, these results show that smoking modulates functional brain lateralization significantly and in a sexually dimorphic manner. Given that some psychiatric disorders have been associated with altered brain asymmetries and increased smoking prevalence, nicotinergic effects need to be specifically investigated in this context in future studies. Copyright © 2010 Elsevier Ltd. All rights reserved.
Weisz, Nathan; Obleser, Jonas
2014-01-01
Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Sounds and silence: An optical topography study of language recognition at birth
NASA Astrophysics Data System (ADS)
Peña, Marcela; Maki, Atsushi; Kovačić, Damir; Dehaene-Lambertz, Ghislaine; Koizumi, Hideaki; Bouquet, Furio; Mehler, Jacques
2003-09-01
Does the neonate's brain have left hemisphere (LH) dominance for speech? Twelve full-term neonates participated in an optical topography study designed to assess whether the neonate brain responds specifically to linguistic stimuli. Participants were tested with normal infant-directed speech, with the same utterances played in reverse and without auditory stimulation. We used a 24-channel optical topography device to assess changes in the concentration of total hemoglobin in response to auditory stimulation in 12 areas of the right hemisphere and 12 areas of the LH. We found that LH temporal areas showed significantly more activation when infants were exposed to normal speech than to backward speech or silence. We conclude that neonates are born with an LH superiority to process specific properties of speech.
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
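Ear and visual half-field advantages of this kind are conventionally summarized with a laterality index such as (R - L)/(R + L); the abstract does not state which index the authors computed, so the function below should be read only as the standard convention, with made-up example scores.

```python
# Common laterality-index convention, (R - L) / (R + L): positive values indicate a
# right-ear / right-visual-half-field (i.e. left-hemisphere) advantage. The exact index
# used in the study is not given in the abstract; these example counts are invented.
def laterality_index(right_correct: int, left_correct: int) -> float:
    total = right_correct + left_correct
    return (right_correct - left_correct) / total if total else 0.0

print(laterality_index(28, 20))   # e.g. +0.167: right-ear advantage on the dichotic task
print(laterality_index(18, 26))   # e.g. -0.182: left half-field advantage on the visual task
```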
Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie
2018-01-01
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli most benefited patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
Boumans, Tiny; Gobes, Sharon M. H.; Poirier, Colline; Theunissen, Frederic E.; Vandersmissen, Liesbeth; Pintjens, Wouter; Verhoye, Marleen; Bolhuis, Johan J.; Van der Linden, Annemie
2008-01-01
Background Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the ‘song system’ is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. Methods and Findings Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. Conclusions Based on our results we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory processing stream. PMID:18781203
Pawlisch, Benjamin A.; Remage-Healey, Luke
2014-01-01
Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher-order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these findings support the hypothesis that acute neurosteroid actions can propagate within and between neural circuits to modulate their functional connectivity. PMID:25453773
Pawlisch, B A; Remage-Healey, L
2015-01-22
Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these findings support the hypothesis that acute neurosteroid actions can propagate within and between neural circuits to modulate their functional connectivity. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin
2012-01-01
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
Synaptic integration in dendrites: exceptional need for speed
Golding, Nace L; Oertel, Donata
2012-01-01
Some neurons in the mammalian auditory system are able to detect and report the coincident firing of inputs with remarkable temporal precision. A strong, low-voltage-activated potassium conductance (gKL) at the cell body and dendrites gives these neurons sensitivity to the rate of depolarization by EPSPs, allowing neurons to assess the coincidence of the rising slopes of unitary EPSPs. Two groups of neurons in the brain stem, octopus cells in the posteroventral cochlear nucleus and principal cells of the medial superior olive (MSO), extract acoustic information by assessing coincident firing of their inputs over a submillisecond timescale and convey that information at rates of up to 1000 spikes s⁻¹. Octopus cells detect the coincident activation of groups of auditory nerve fibres by broadband transient sounds, compensating for the travelling wave delay by dendritic filtering, while MSO neurons detect coincident activation of similarly tuned neurons from each of the two ears through separate dendritic tufts. Each makes use of filtering that is introduced by the spatial distribution of inputs on dendrites. PMID:22930273
Position-dependent patterning of spontaneous action potentials in immature cochlear inner hair cells
Johnson, Stuart L.; Eckrich, Tobias; Kuhn, Stephanie; Zampini, Valeria; Franz, Christoph; Ranatunga, Kishani M.; Roberts, Terri P.; Masetto, Sergio; Knipper, Marlies; Kros, Corné J.; Marcotti, Walter
2011-01-01
Spontaneous action potential activity is crucial for mammalian sensory system development. In the auditory system, patterned firing activity has been observed in immature spiral ganglion cells and brain-stem neurons and is likely to depend on cochlear inner hair cell (IHC) action potentials. It remains uncertain whether spiking activity is intrinsic to developing IHCs and whether it shows patterning. We found that action potentials are intrinsically generated by immature IHCs of altricial rodents and that apical IHCs exhibit bursting activity as opposed to more sustained firing in basal cells. We show that the efferent neurotransmitter ACh, by fine-tuning the IHC’s resting membrane potential (Vm), is crucial for the bursting pattern in apical cells. Endogenous extracellular ATP also contributes to the Vm of apical and basal IHCs by activating SK2 channels. We hypothesize that the difference in firing pattern along the cochlea instructs the tonotopic differentiation of IHCs and auditory pathway. PMID:21572434
Johnson, Stuart L; Eckrich, Tobias; Kuhn, Stephanie; Zampini, Valeria; Franz, Christoph; Ranatunga, Kishani M; Roberts, Terri P; Masetto, Sergio; Knipper, Marlies; Kros, Corné J; Marcotti, Walter
2011-06-01
Spontaneous action potential activity is crucial for mammalian sensory system development. In the auditory system, patterned firing activity has been observed in immature spiral ganglion and brain-stem neurons and is likely to depend on cochlear inner hair cell (IHC) action potentials. It remains uncertain whether spiking activity is intrinsic to developing IHCs and whether it shows patterning. We found that action potentials were intrinsically generated by immature IHCs of altricial rodents and that apical IHCs showed bursting activity as opposed to more sustained firing in basal cells. We show that the efferent neurotransmitter acetylcholine fine-tunes the IHC's resting membrane potential (Vm), and as such is crucial for the bursting pattern in apical cells. Endogenous extracellular ATP also contributes to the Vm of apical and basal IHCs by triggering small-conductance Ca²⁺-activated K⁺ (SK2) channels. We propose that the difference in firing pattern along the cochlea instructs the tonotopic differentiation of IHCs and auditory pathway.
Reversal of age-related neural timing delays with training
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-01-01
Neural slowing is commonly noted in older adults, with consequences for sensory, motor, and cognitive domains. One of the deleterious effects of neural slowing is impairment of temporal resolution; older adults, therefore, have reduced ability to process the rapid events that characterize speech, especially in noisy environments. Although hearing aids provide increased audibility, they cannot compensate for deficits in auditory temporal processing. Auditory training may provide a strategy to address these deficits. To that end, we evaluated the effects of auditory-based cognitive training on the temporal precision of subcortical processing of speech in noise. After training, older adults exhibited faster neural timing and experienced gains in memory, speed of processing, and speech-in-noise perception, whereas a matched control group showed no changes. Training was also associated with decreased variability of brainstem response peaks, suggesting a decrease in temporal jitter in response to a speech signal. These results demonstrate that auditory-based cognitive training can partially restore age-related deficits in temporal processing in the brain; this plasticity in turn promotes better cognitive and perceptual skills. PMID:23401541
NASA Astrophysics Data System (ADS)
Lauter, Judith
2002-05-01
Several noninvasive methods are available for studying the neural bases of human sensory-motor function, but their cost is prohibitive for many researchers and clinicians. The auditory cross section (AXS) test battery utilizes relatively inexpensive methods, yet yields data that are at least equivalent, if not superior in some applications, to those generated by more expensive technologies. The acronym emphasizes access to axes: the battery makes it possible to assess dynamic physiological relations along all three body-brain axes, rostro-caudal (afferent/efferent), dorso-ventral, and right-left, on an individually-specific basis, extending from cortex to the periphery. For auditory studies, a three-level physiological ear-to-cortex profile is generated, utilizing (1) quantitative electroencephalography (qEEG); (2) the repeated evoked potentials version of the auditory brainstem response (REPs/ABR); and (3) otoacoustic emissions (OAEs). Battery procedures will be explained, and sample data presented illustrating correlated multilevel changes in ear, voice, heart, brainstem, and cortex in response to circadian rhythms, and challenges with substances such as antihistamines and Ritalin. Potential applications for the battery include studies of central auditory processing, reading problems, hyperactivity, neural bases of voice and speech motor control, neurocardiology, individually-specific responses to medications, and the physiological bases of tinnitus, hyperacusis, and related treatments.
Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model
Nakao, Kazuhito; Nakazawa, Kazu
2014-01-01
In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and its phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The "paradoxically" high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
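Evoked ASSR power and phase-locking of the kind reported here are commonly quantified from the complex Fourier coefficient at the stimulation frequency: power of the trial-averaged response, and inter-trial phase coherence across trials. The sketch below illustrates that computation on synthetic epochs; the sampling rate, epoch length, and noise level are placeholders rather than the recording parameters of the study.

```python
# Sketch: evoked 40-Hz power and inter-trial phase coherence (ITC) from epoched LFP/EEG.
# The synthetic data and all parameters below are placeholders, not study values.
import numpy as np

fs, n_trials, n_samples = 1000, 100, 1000            # 1-s epochs at 1 kHz
t = np.arange(n_samples) / fs
rng = np.random.default_rng(2)
epochs = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, (n_trials, n_samples))

freqs = np.fft.rfftfreq(n_samples, 1 / fs)
idx40 = np.argmin(np.abs(freqs - 40.0))               # bin closest to the 40-Hz stimulation rate

spectra = np.fft.rfft(epochs, axis=1)                 # complex spectrum per trial
evoked_power = np.abs(np.fft.rfft(epochs.mean(axis=0))[idx40]) ** 2   # power of the trial average
itc = np.abs(np.mean(spectra[:, idx40] / np.abs(spectra[:, idx40])))  # phase-locking, 0..1

print(f"evoked 40-Hz power: {evoked_power:.1f}, ITC: {itc:.2f}")
```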
Temporal lobe networks supporting the comprehension of spoken words.
Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius
2017-09-01
Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural connectome-based lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved.
Kauramäki, Jaakko; Jääskeläinen, Iiro P.; Hänninen, Jarno L.; Auranen, Toni; Nummenmaa, Aapo; Lampinen, Jouko; Sams, Mikko
2012-01-01
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones that occasionally replaced the 1000-Hz standard tones of 300-ms duration embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, and with narrower notches than the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: (1) one at early (~100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and (2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latencies (~300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds. PMID:23071654
Brain stem evoked response audiometry of former drug users.
Weich, Tainara Milbradt; Tochetto, Tania Maria; Seligman, Lilian
2012-10-01
Illicit drugs are known for their deleterious effects upon the central nervous system and more specifically for how they adversely affect hearing. This study aims to analyze and compare the hearing complaints and the results of brainstem evoked response audiometry (BERA) of former drug users attending a support group. This is a cross-sectional non-experimental descriptive quantitative study. The sample consisted of 17 subjects divided by their preferred drug of use. Ten individuals were placed in the marijuana group (G1) and seven in the crack/cocaine group (G2). The subjects were further divided based on how long they had been using drugs: 1 to 5 years, 6 to 10 years, and over 15 years. They were interviewed, and assessed by pure tone audiometry, acoustic impedance tests, and BERA. No statistically significant differences in absolute latencies or interpeak intervals were found between G1 and G2 or across durations of drug use. However, only five of the 17 individuals had BERA results appropriate for their ages. Marijuana and crack/cocaine may cause diffuse disorders in the brainstem and compromise the transmission of auditory stimuli regardless of how long these substances are used.
Decoding the auditory brain with canonical component analysis.
de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund
2018-05-15
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
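As a rough illustration of the CCA idea on synthetic data, the sketch below relates a time-lagged stimulus envelope to multichannel EEG and reports the resulting canonical correlations. The lag range, number of components, and the omission of the dimensionality-reduction and filtering steps described by the authors are simplifying assumptions; this is not the published pipeline.

```python
# Minimal CCA sketch relating a stimulus envelope to multichannel EEG (synthetic data).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_samples, n_channels, n_lags = 5000, 32, 10
envelope = rng.normal(size=n_samples)                      # stand-in stimulus envelope

# EEG = delayed, mixed copies of the envelope plus noise on a subset of channels
eeg = rng.normal(size=(n_samples, n_channels))
eeg[5:, :8] += 0.5 * envelope[:-5, None]

# time-lagged stimulus representation (0..n_lags-1 sample delays)
X = np.column_stack([np.roll(envelope, k) for k in range(n_lags)])

cca = CCA(n_components=3)
Xc, Yc = cca.fit_transform(X, eeg)
for i in range(3):   # canonical correlations between transformed stimulus and response
    print(i, np.corrcoef(Xc[:, i], Yc[:, i])[0, 1])
```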
Auditory biological marker of concussion in children
Kraus, Nina; Thompson, Elaine C.; Krizman, Jennifer; Cook, Katherine; White-Schwoch, Travis; LaBella, Cynthia R.
2016-01-01
Concussions carry devastating potential for cognitive, neurologic, and socio-emotional disease, but no objective test reliably identifies a concussion and its severity. A variety of neurological insults compromise sound processing, particularly in complex listening environments that place high demands on brain processing. The frequency-following response captures the high computational demands of sound processing with extreme granularity and reliably reveals individual differences. We hypothesize that concussions disrupt these auditory processes, and that the frequency-following response indicates concussion occurrence and severity. Specifically, we hypothesize that concussions disrupt the processing of the fundamental frequency, a key acoustic cue for identifying and tracking sounds and talkers, and, consequently, understanding speech in noise. Here we show that children who sustained a concussion exhibit a signature neural profile. They have worse representation of the fundamental frequency, and smaller and more sluggish neural responses. Neurophysiological responses to the fundamental frequency partially recover to control levels as concussion symptoms abate, suggesting a gain in biological processing following partial recovery. Neural processing of sound correctly identifies 90% of concussion cases and clears 95% of control cases, suggesting this approach has practical potential as a scalable biological marker for sports-related concussion and other types of mild traumatic brain injuries. PMID:28005070
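Representation of the fundamental frequency in a frequency-following response is typically quantified as the spectral amplitude at F0 of the averaged response. The sketch below shows that computation with an assumed 100-Hz fundamental and placeholder recording parameters, not the stimulus or acquisition values used in the study.

```python
# Sketch: fundamental-frequency (F0) amplitude of an averaged frequency-following response.
# The 100-Hz F0, sampling rate and analysis window are placeholders.
import numpy as np

fs, f0 = 20000, 100.0
t = np.arange(0, 0.2, 1 / fs)                        # 200-ms response window
rng = np.random.default_rng(4)
avg_ffr = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.size)   # stand-in averaged response

spectrum = np.abs(np.fft.rfft(avg_ffr * np.hanning(t.size))) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0_amp = spectrum[np.argmin(np.abs(freqs - f0))]     # amplitude at the fundamental
print(f"F0 amplitude: {f0_amp:.4f}")
```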
ERIC Educational Resources Information Center
Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.
2013-01-01
We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…
Top-down and bottom-up modulation of brain structures involved in auditory discrimination.
Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver
2009-11-10
Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.
Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation
NASA Astrophysics Data System (ADS)
Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr
2017-12-01
Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. The particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing repetitive stimulation in three different modalities (visual, tactile and auditory) with precise control of stimulus parameters. Results from a pilot study with stimulation in a single modality and in two modalities simultaneously demonstrate the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. The stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. At the same time, the described setup, along with the presented methodology, is a considerable improvement on and extension of state-of-the-art methods in the field. The flexibility of the device, together with the developed analysis methodology, can lead to further development of diagnostic methods and provide deeper insight into information processing in the human brain.
Wiggins, Ian M; Anderson, Carly A; Kitterick, Pádraig T; Hartley, Douglas E H
2016-09-01
Functional near-infrared spectroscopy (fNIRS) is a silent, non-invasive neuroimaging technique that is potentially well suited to auditory research. However, the reliability of auditory-evoked activation measured using fNIRS is largely unknown. The present study investigated the test-retest reliability of speech-evoked fNIRS responses in normally-hearing adults. Seventeen participants underwent fNIRS imaging in two sessions separated by three months. In a block design, participants were presented with auditory speech, visual speech (silent speechreading), and audiovisual speech conditions. Optode arrays were placed bilaterally over the temporal lobes, targeting auditory brain regions. A range of established metrics was used to quantify the reproducibility of cortical activation patterns, as well as the amplitude and time course of the haemodynamic response within predefined regions of interest. The use of a signal processing algorithm designed to reduce the influence of systemic physiological signals was found to be crucial to achieving reliable detection of significant activation at the group level. For auditory speech (with or without visual cues), reliability was good to excellent at the group level, but highly variable among individuals. Temporal-lobe activation in response to visual speech was less reliable, especially in the right hemisphere. Consistent with previous reports, fNIRS reliability was improved by averaging across a small number of channels overlying a cortical region of interest. Overall, the present results confirm that fNIRS can measure speech-evoked auditory responses in adults that are highly reliable at the group level, and indicate that signal processing to reduce physiological noise may substantially improve the reliability of fNIRS measurements. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
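The abstract does not name the specific reliability metrics, but test-retest reliability of response amplitudes is often summarized with an intraclass correlation coefficient. The sketch below implements one common form, ICC(2,1), on synthetic two-session data and should be read as a generic illustration rather than the study's analysis.

```python
# Generic test-retest reliability sketch: ICC(2,1) from a subjects x sessions matrix.
# The metric choice and the synthetic amplitudes are assumptions, not the study's data.
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC. data: subjects x sessions."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()     # between-subject sum of squares
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()     # between-session sum of squares
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(5)
true_amp = rng.normal(1.0, 0.3, 17)                            # stable subject-level response amplitude
sessions = np.column_stack([true_amp + rng.normal(0, 0.1, 17) for _ in range(2)])
print(f"test-retest ICC: {icc_2_1(sessions):.2f}")
```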
Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre
2018-02-01
The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
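The frequency-tagging comparison described here amounts to contrasting spectral amplitudes at meter-related frequencies with amplitudes at other frequencies present in the rhythm. The sketch below illustrates that contrast on synthetic EEG; the beat frequency and both frequency sets are invented placeholders, not the stimulus frequencies used in the experiments.

```python
# Sketch of a frequency-tagging contrast: mean spectral amplitude at assumed meter-related
# frequencies vs. other envelope frequencies, computed from synthetic EEG.
import numpy as np

fs, dur = 500, 60                                    # 60 s of "EEG" at 500 Hz
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(6)
eeg = (0.8 * np.sin(2 * np.pi * 1.25 * t)            # larger response at the assumed beat frequency
       + 0.3 * np.sin(2 * np.pi * 3.75 * t)          # smaller response at a meter-unrelated frequency
       + rng.normal(0, 1, t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) / t.size

def mean_amp(target_freqs):
    return np.mean([amp[np.argmin(np.abs(freqs - f))] for f in target_freqs])

meter_related = [1.25, 2.5, 5.0]                     # assumed beat frequency and harmonics
meter_unrelated = [1.875, 3.125, 4.375]
print("meter-related:", mean_amp(meter_related), "meter-unrelated:", mean_amp(meter_unrelated))
```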
Analyzing pitch chroma and pitch height in the human brain.
Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D
2003-11-01
The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.
Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee
2012-09-19
Memory is thought to be sparsely encoded throughout multiple brain regions, forming a unique memory trace. Although evidence has established that the amygdala is a key brain site for storage and retrieval of auditory conditioned fear memory, it remains unclear whether auditory brain regions are involved in fear memory storage or retrieval. To investigate this possibility, we systematically imaged the brain activity patterns in the lateral amygdala, MGm/PIN, and AuV/TeA using activity-dependent induction of the immediate early gene zif268 after recent and remote memory retrieval of auditory conditioned fear. Consistent with the critical role of the amygdala in fear memory, zif268 activity in the lateral amygdala was significantly increased after both recent and remote memory retrieval. Interestingly, however, the density of zif268 (+) neurons in both MGm/PIN and AuV/TeA, particularly in layers IV and VI, was increased only after remote but not recent fear memory retrieval compared to control groups. Further analysis of zif268 signals in AuV/TeA revealed that the conditioned tone induced stronger zif268 expression than the familiar tone in individual zif268 (+) neurons after recent memory retrieval. Taken together, our results support the view that the lateral amygdala is a key brain site for permanent fear memory storage and suggest that MGm/PIN and AuV/TeA might play a role in remote memory storage or retrieval of auditory conditioned fear, or, alternatively, that these auditory brain regions might process familiar or conditioned tone information differently at recent and remote time points.
The harmonic organization of auditory cortex
Wang, Xiaoqin
2013-01-01
A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544
Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B
2012-06-07
In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. Copyright © 2012 Elsevier Inc. All rights reserved.
Fukushima, Makoto; Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.
2012-01-01
Summary In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here we used chronic micro-electrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. PMID:22681693
Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.
De Vos, Maarten; Gandras, Katharina; Debener, Stefan
2014-01-01
In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system, while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control condition) and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p<.001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above chance-level classification accuracies for most participants (19 out of 20 for the seated, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible. © 2013.
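The study used regularized stepwise linear discriminant analysis for single-trial classification; the sketch below uses scikit-learn's shrinkage-regularized LDA, a related but not identical classifier, with synthetic single-trial features (channels x time windows) standing in for real P300 epochs. Channel and window counts, effect size, and class balance are placeholders.

```python
# Sketch: single-trial target vs. distractor classification with shrinkage-regularized LDA
# (a related but not identical approach to the regularized stepwise LDA used in the study).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_features = 300, 14 * 10           # e.g. 14 channels x 10 time-window means per epoch
y = rng.integers(0, 2, n_trials)              # 1 = target, 0 = distractor (synthetic labels)
X = rng.normal(0, 1, (n_trials, n_features))
X[y == 1, :20] += 0.4                         # simulated P300-like amplitude increase on target trials

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)     # cross-validated single-trial accuracy
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```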
Rapid Effects of Hearing Song on Catecholaminergic Activity in the Songbird Auditory Pathway
Matragrano, Lisa L.; Beaulieu, Michaël; Phillip, Jessica O.; Rae, Ali I.; Sanford, Sara E.; Sockman, Keith W.; Maney, Donna L.
2012-01-01
Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses. PMID:22724011
Functional significance of the electrocorticographic auditory responses in the premotor cortex.
Tanji, Kazuyo; Sakurada, Kaori; Funiu, Hayato; Matsuda, Kenichiro; Kayama, Takamasa; Ito, Sayuri; Suzuki, Kyoko
2015-01-01
In addition to well-known motor activity in the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus with electrocorticographic recordings while she performed a verb generation task during an awake craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved and she exhibited intact comprehension of both spoken and written language. The present findings demonstrate that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature and that its sensory responses are more consistent with the "sensory theory of speech production," in which it was proposed that sensory representations are used to guide motor-articulatory processes.
The auditory neural network in man
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
The principles of anatomy and physiology necessary for understanding brain wave recordings made from the scalp of normal people are briefly discussed. Brain waves evoked by sounds are described and certain of their features are related to the physical aspects of the stimulus and to the psychological state of the listener. The position is taken that data obtained through scalp probes can reveal a large amount of detail about brain functioning and that analysis of such records enables detection of the nervous system's response to an acoustic message at the moment of its inception and tracking of the message's progress through the brain. Brain events responsible for distinguishing between similar signals and making decisions about them appear to generate characteristic and identifiable electrical waves. Some theoretical speculation about these data is introduced with the aim of generating a more heuristic model of the functioning brain.
Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds
Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.
2012-01-01
Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625
Entracking as a Brain Stem Code for Pitch: The Butte Hypothesis.
Joris, Philip X
2016-01-01
The basic nature of pitch is much debated. A robust code for pitch exists in the auditory nerve in the form of an across-fiber pooled interspike interval (ISI) distribution, which resembles the stimulus autocorrelation. An unsolved question is how this representation can be "read out" by the brain. A new view is proposed in which a known brain-stem property plays a key role in the coding of periodicity, which I refer to as "entracking", a contraction of "entrained phase-locking". It is proposed that a scalar rather than vector code of periodicity exists by virtue of coincidence detectors that code the dominant ISI directly into spike rate through entracking. Perfect entracking means that a neuron fires one spike per stimulus-waveform repetition period, so that firing rate equals the repetition frequency. Key properties are invariance with SPL and generalization across stimuli. The main limitation in this code is the upper limit of firing (~ 500 Hz). It is proposed that entracking provides a periodicity tag which is superimposed on a tonotopic analysis: at low SPLs and fundamental frequencies > 500 Hz, a spectral or place mechanism codes for pitch. With increasing SPL the place code degrades but entracking improves and first occurs in neurons with low thresholds for the spectral components present. The prediction is that populations of entracking neurons, extended across characteristic frequency, form plateaus ("buttes") of firing rate tied to periodicity.
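The auditory-nerve representation referred to above, an across-fiber pooled interspike-interval distribution whose dominant interval matches the stimulus period, can be illustrated with the following toy Python sketch. The stimulus, the crude phase-locked spike model, and all parameters are assumptions made purely for illustration.

    # Toy illustration: pool all-order interspike intervals (ISIs) across synthetic
    # phase-locked spike trains and locate the dominant interval.
    import numpy as np

    fs = 100_000                                   # sampling rate (Hz), assumed
    f0 = 200.0                                     # fundamental of a synthetic periodic stimulus
    t = np.arange(0.0, 1.0, 1.0 / fs)
    stimulus = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)

    # crude phase locking: spike times drawn preferentially near the waveform peaks
    prob = np.clip(stimulus, 0.0, None) ** 4
    prob /= prob.sum()
    rng = np.random.default_rng(0)
    spike_trains = [np.sort(rng.choice(t, size=300, replace=False, p=prob)) for _ in range(50)]

    def all_order_isis(spikes, max_lag):
        # every positive spike-pair interval up to max_lag (simple O(n^2) version)
        d = spikes[None, :] - spikes[:, None]
        return d[(d > 0) & (d < max_lag)]

    max_lag = 0.02                                 # 20 ms
    pooled = np.concatenate([all_order_isis(st, max_lag) for st in spike_trains])
    hist, edges = np.histogram(pooled, bins=400, range=(0.0, max_lag))
    print(f"dominant pooled ISI ~{edges[np.argmax(hist)] * 1000:.2f} ms; "
          f"stimulus period {1000 / f0:.2f} ms")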
What the Toadfish Ear Tells the Toadfish Brain About Sound.
Edds-Walton, Peggy L
2016-01-01
Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.
EEG Responses to Auditory Stimuli for Automatic Affect Recognition
Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin
2016-01-01
Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database, also suitable for disabled individuals. Stimuli are grouped in three valence categories: unpleasant, neutral, and pleasant. Significant differences in time-domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral conditions over midline electrodes. Time-domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
Thoughts of Death Modulate Psychophysical and Cortical Responses to Threatening Stimuli
Valentini, Elia; Koch, Katharina; Aglioti, Salvatore Maria
2014-01-01
Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we explored whether personality and demographics modulate psychophysical and neural changes related to mortality salience (MS). Following MS induction, a specific increase in ratings of intensity and threat was found for both nociceptive and auditory stimuli. While MS did not have any specific effect on nociceptive and auditory evoked potentials, larger amplitude of theta oscillatory activity related to thermal nociceptive activity was found after thoughts of death were induced. MS thus exerted a top-down modulation on theta electroencephalographic oscillatory amplitude, specifically for brain activity triggered by painful thermal stimuli. This effect was higher in participants reporting higher threat perception, suggesting that inducing a death-related mind-set may have an influence on body-defence related somatosensory representations. PMID:25386905
Residual Neural Processing of Musical Sound Features in Adult Cochlear Implant Users
Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias
2014-01-01
Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants’ attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients’ age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights: - Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult cochlear implant users. - The brains of adult CI users automatically process sound feature changes even when inserted in a musical context. - CI users show disrupted automatic discriminatory abilities for rhythm in the brain. - Our fast paradigm demonstrates residual musical abilities in the brains of adult CI users, giving hope for their future rehabilitation. PMID:24772074
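For readers unfamiliar with the measure, the mismatch negativity reported above is conventionally quantified as a deviant-minus-standard difference wave; a minimal sketch follows, with the epochs, sampling rate, and 100-250 ms search window all being illustrative assumptions rather than parameters of the study.

    # Minimal MMN sketch: difference wave, then peak amplitude and latency in a fixed window.
    import numpy as np

    fs = 500                                        # sampling rate (Hz), assumed
    times = np.arange(-0.1, 0.5, 1.0 / fs)          # epoch from -100 to +500 ms
    rng = np.random.default_rng(1)
    standard_epochs = rng.standard_normal((200, times.size))   # trials x time, one channel
    deviant_epochs = rng.standard_normal((40, times.size))

    difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

    win = (times >= 0.10) & (times <= 0.25)         # typical MMN latency window
    peak = np.argmin(difference_wave[win])          # the MMN is a negativity
    print(f"MMN amplitude {difference_wave[win][peak]:.2f} (a.u.) "
          f"at {times[win][peak] * 1000:.0f} ms")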
Tesche, Claudia D; Kodituwakku, Piyadasa W; Garcia, Christopher M; Houck, Jon M
2015-01-01
Children exposed to substantial amounts of alcohol in utero display a broad range of morphological and behavioral outcomes, which are collectively referred to as fetal alcohol spectrum disorders (FASDs). Common to all children on the spectrum are cognitive and behavioral problems that reflect central nervous system dysfunction. Little is known, however, about the potential effects of variables such as sex on alcohol-induced brain damage. The goal of the current research was to utilize magnetoencephalography (MEG) to examine the effect of sex on brain dynamics in adolescents and young adults with FASD during the performance of an auditory oddball task. The stimuli were short trains of 1 kHz "standard" tone bursts (80%) randomly interleaved with 1.5 kHz "target" tone bursts (10%) and "novel" digital sounds (10%). Participants made motor responses to the target tones. Results are reported for 44 individuals (18 males and 26 females) ages 12 through 22 years. Nine males and 13 females had a diagnosis of FASD and the remainder were typically-developing age- and sex-matched controls. The main finding was widespread sex-specific differential activation of the frontal, medial and temporal cortex in adolescents with FASD compared to typically developing controls. Significant differences in evoked-response and time-frequency measures of brain dynamics were observed for all stimulus types in the auditory cortex, inferior frontal sulcus and hippocampus. These results underscore the importance of considering the influence of sex when analyzing neurophysiological data in children with FASD.
A mouse model for degeneration of the spiral ligament.
Kada, Shinpei; Nakagawa, Takayuki; Ito, Juichi
2009-06-01
Previous studies have indicated the importance of the spiral ligament (SL) in the pathogenesis of sensorineural hearing loss. The aim of this study was to establish a mouse model for SL degeneration as the basis for the development of new strategies for SL regeneration. We injected 3-nitropropionic acid (3-NP), an inhibitor of succinate dehydrogenase, at various concentrations into the posterior semicircular canal of adult C57BL/6 mice. Saline-injected animals were used as controls. Auditory function was monitored by measurements of auditory brain stem responses (ABRs). On postoperative day 14, cochlear specimens were obtained after the measurement of the endocochlear potential (EP). Animals that were injected with 5 or 10 mM 3-NP showed a massive elevation of ABR thresholds along with extensive degeneration of the cochleae. Cochleae injected with 1 mM 3-NP exhibited selective degeneration of the SL fibrocytes but alterations in EP levels and ABR thresholds were not of sufficient magnitude to allow for testing functional recovery after therapeutic interventions. Animals injected with 3 mM 3-NP showed a reduction of around 50% in the EP along with a significant loss of SL fibrocytes, although degeneration of spiral ganglion neurons and hair cells was still present in certain regions. These findings indicate that cochleae injected with 3 mM 3-NP may be useful in investigations designed to test the feasibility of new therapeutic manipulations for functional SL regeneration.
Andresen, V; Bach, D R; Poellinger, A; Tsrouya, C; Stroh, A; Foerschler, A; Georgiewa, P; Zimmer, C; Mönnikes, H
2005-12-01
Visceral hypersensitivity in irritable bowel syndrome (IBS) has been associated with altered cerebral activations in response to visceral stimuli. It is unclear whether these processing alterations are specific for visceral sensation. In this study we aimed to determine by functional magnetic resonance imaging (fMRI) whether cerebral processing of supraliminal and subliminal rectal stimuli and of auditory stimuli is altered in IBS. In eight IBS patients and eight healthy controls, fMRI activations were recorded during auditory and rectal stimulation. Intensities of rectal balloon distension were adapted to the individual threshold of first perception (IPT): subliminal (IPT -10 mmHg), liminal (IPT), or supraliminal (IPT +10 mmHg). IBS patients relative to controls responded with lower activations of the prefrontal cortex (PFC) and anterior cingulate cortex (ACC) to both subliminal and supraliminal stimulation and with higher activation of the hippocampus (HC) to supraliminal stimulation. In IBS patients, not in controls, ACC and HC were also activated by auditory stimulation. In IBS patients, decreased ACC and PFC activation with subliminal and supraliminal rectal stimuli and increased HC activation with supraliminal stimuli suggest disturbances of the associative and emotional processing of visceral sensation. Hyperreactivity to auditory stimuli suggests that altered sensory processing in IBS may not be restricted to visceral sensation.
Not lost in translation: neural responses shared across languages.
Honey, Christopher J; Thompson, Christopher R; Lerner, Yulia; Hasson, Uri
2012-10-31
How similar are the brains of listeners who hear the same content expressed in different languages? We directly compared the fMRI response time courses of English speakers and Russian speakers who listened to a real-life Russian narrative and its English translation. In the translation, we tried to preserve the content of the narrative while reducing the structural similarities across languages. The story evoked similar brain responses, invariant to the structural changes across languages, beginning just outside early auditory areas and extending through temporal, parietal, and frontal cerebral cortices. The similarity of responses across languages was nearly equal to the similarity of responses within each language group. The present results demonstrate that the human brain processes real-life information in a manner that is largely insensitive to the language in which that information is conveyed. The methods introduced here can potentially be used to quantify the transmission of meaning across cultural and linguistic boundaries.
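A minimal sketch of the inter-subject correlation logic behind this comparison: each listener's response time course is correlated with the average time course of the other language group (across languages) and, leave-one-out, with the rest of their own group (within language). Group sizes and the random time courses are placeholders, not the study's data.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_timepoints = 10, 300
    english_group = rng.standard_normal((n_subjects, n_timepoints))   # one ROI time course per listener
    russian_group = rng.standard_normal((n_subjects, n_timepoints))

    def isc_across(group_a, group_b):
        # correlate each subject in group_a with the mean time course of group_b
        mean_b = group_b.mean(axis=0)
        return np.array([np.corrcoef(sub, mean_b)[0, 1] for sub in group_a])

    def isc_within(group):
        # leave-one-out: correlate each subject with the mean of the remaining subjects
        return np.array([np.corrcoef(group[i], np.delete(group, i, axis=0).mean(axis=0))[0, 1]
                         for i in range(len(group))])

    print(f"across-language ISC {isc_across(english_group, russian_group).mean():.3f}, "
          f"within-language ISC {isc_within(english_group).mean():.3f}")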
Listening to urban soundscapes: Physiological validity of perceptual dimensions.
Irwin, Amy; Hall, Deborah A; Peters, Andrew; Plack, Christopher J
2011-02-01
Predominantly, the impact of environmental noise is measured using sound level, ignoring the influence of other factors on subjective experience. The present study tested physiological responses to natural urban soundscapes, using functional magnetic resonance imaging and vectorcardiography. City-based recordings were matched in overall sound level (71 dB(A)), but differed on ratings of pleasantness and vibrancy. Listening to soundscapes evoked significant activity in a number of auditory brain regions. Compared with soundscapes that evoked no (neutral) emotional response, those evoking a pleasant or unpleasant emotional response engaged an additional neural circuit including the right amygdala. Ratings of vibrancy had little effect overall, and brain responses were more sensitive to pleasantness than was heart rate. A novel finding is that urban soundscapes with similar loudness can have dramatically different effects on the brain's response to the environment. Copyright © 2010 Society for Psychophysiological Research.
Haslbeck, Friederike Barbara; Bassler, Dirk
2018-01-01
Human and animal studies demonstrate that early auditory experiences influence brain development. The findings are particularly crucial following preterm birth, as the plasticity of auditory regions and the development of the cortex are heavily dependent on the quality of auditory stimulation. Brain maturation in preterm infants may be affected, among other things, by the overwhelming auditory environment of the neonatal intensive care unit (NICU). Conversely, auditory deprivation (e.g., the lack of the regular intrauterine rhythms of the maternal heartbeat and the maternal voice) may also have an impact on brain maturation. Therefore, a nurturing enrichment of the auditory environment for preterm infants is warranted. Creative music therapy (CMT) addresses these demands by offering infant-directed singing in lullaby style that is continually adapted to the neonate's needs. The therapeutic approach is tailored to the individual developmental stage, entrained to the breathing rhythm, and adapted to the subtle expressions of the newborn. Not only the therapist and the neonate but also the parents play a role in CMT. In this article, we describe how to apply music therapy in a neonatal intensive care environment to support very preterm infants and their families. We speculate that the enriched musical experience may promote brain development, and we critically discuss the available evidence in support of our assumption.
Touch activates human auditory cortex.
Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta
2006-05-01
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.
Comparison of Infant and Adult P300 from Auditory Stimuli.
ERIC Educational Resources Information Center
McIsaac, Heather; Polich, John
1992-01-01
Recorded electroencephalographic activity of infants and adults who heard 1 unique tone in a series of 10 tones. The amplitude of event-related brain potentials in response to the unique tone was smaller, and its latency longer, for infants than for adults. Evoked potentials remained stable across trials. (BC)
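The two quantities compared above, P300 amplitude and latency, are typically read from the averaged waveform as in the short sketch below; the simulated waveform and the 250-600 ms search window are illustrative assumptions.

    import numpy as np

    fs = 250                                        # sampling rate (Hz), assumed
    times = np.arange(-0.1, 0.8, 1.0 / fs)
    erp = 8.0 * np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2))   # fake averaged ERP peaking ~350 ms

    win = (times >= 0.25) & (times <= 0.60)         # assumed P300 search window
    peak = np.argmax(erp[win])
    print(f"P300 amplitude {erp[win][peak]:.1f} (a.u.), latency {times[win][peak] * 1000:.0f} ms")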
Disentangling conscious from unconscious cognitive processing with event-related EEG potentials.
Rohaut, B; Naccache, L
By looking for properties of consciousness, cognitive neuroscience studies have dramatically enlarged the scope of unconscious cognitive processing. This emerging knowledge inspired the development of new approaches allowing clinicians to probe and disentangle conscious from unconscious cognitive processes in non-communicating brain-injured patients, both in terms of behaviour and brain activity. This information is extremely valuable for improving diagnosis and prognosis in such patients in both acute and chronic settings. Reciprocally, the growing body of observations from patients suffering from disorders of consciousness provides valuable constraints for theoretical models of consciousness. In this review we chose to illustrate these recent developments by focusing on brain signals recorded with EEG at bedside in response to auditory stimuli. More precisely, we present the respective EEG markers of unconscious and conscious processing of two classes of auditory stimuli (sounds and words). We show that in both cases, conscious access to the corresponding representation (e.g., auditory regularity and verbal semantic content) shares a similar neural signature (P3b and P600/LPC) that can be distinguished from unconscious processing occurring during an earlier stage (MMN and N400). We propose a two-stage serial model of processing and discuss how unconscious and conscious signatures can be measured at bedside, providing relevant information for both the diagnosis and the prognosis of consciousness recovery. These two examples emphasize how fruitful the bidirectional approach of exploring cognition in healthy subjects and in brain-damaged patients can be. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Connectivity patterns during music listening: Evidence for action-based processing in musicians.
Alluri, Vinoo; Toiviainen, Petri; Burunat, Iballa; Kliuchko, Marina; Vuust, Peter; Brattico, Elvira
2017-06-01
Musical expertise is visible both in the morphology and functionality of the brain. Recent research indicates that functional integration between multi-sensory, somato-motor, default-mode (DMN), and salience (SN) networks of the brain differentiates musicians from non-musicians during resting state. Here, we aimed at determining whether brain networks differentially exchange information in musicians as opposed to non-musicians during naturalistic music listening. Whole-brain graph-theory analyses were performed on participants' fMRI responses. Group-level differences revealed that musicians' primary hubs comprised cerebral and cerebellar sensorimotor regions whereas non-musicians' dominant hubs encompassed DMN-related regions. Community structure analyses of the key hubs revealed greater integration of motor and somatosensory homunculi representing the upper limbs and torso in musicians. Furthermore, musicians who started training at an earlier age exhibited greater centrality in the auditory cortex and in areas related to top-down processes, attention, emotion, somatosensory processing, and non-verbal processing of speech. We here reveal how brain networks organize themselves in a naturalistic music listening situation wherein musicians automatically engage neural networks that are action-based while non-musicians use those that are perception-based to process an incoming auditory stream. Hum Brain Mapp 38:2955-2970, 2017. © 2017 Wiley Periodicals, Inc.
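A hedged sketch of the kind of whole-brain graph-theory analysis mentioned above: a region-by-region correlation matrix is thresholded into a graph and candidate hubs are ranked by degree centrality. The region count, threshold, random time series, and the choice of degree centrality are all assumptions standing in for the study's actual pipeline.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n_regions, n_timepoints = 90, 400
    ts = rng.standard_normal((n_regions, n_timepoints))     # regional fMRI time series (placeholder)
    corr = np.corrcoef(ts)
    np.fill_diagonal(corr, 0.0)

    threshold = 0.1                                         # arbitrary sparsity threshold
    graph = nx.from_numpy_array((np.abs(corr) > threshold).astype(int))

    centrality = nx.degree_centrality(graph)
    hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]
    print("candidate hub regions (node indices):", hubs)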
Congenital Amusia Persists in the Developing Brain after Daily Music Listening
Mignault Goulet, Geneviève; Moreau, Patricia; Robitaille, Nicolas; Peretz, Isabelle
2012-01-01
Congenital amusia is a neurodevelopmental disorder that affects about 3% of the adult population. Adults experiencing this musical disorder in the absence of macroscopically visible brain injury are described as cases of congenital amusia under the assumption that the musical deficits have been present from birth. Here, we show that this disorder can be expressed in the developing brain. We found that (10–13 year-old) children exhibit a marked deficit in the detection of fine-grained pitch differences in both musical and acoustical context in comparison to their normally developing peers comparable in age and general intelligence. This behavioral deficit could be traced down to their abnormal P300 brain responses to the detection of subtle pitch changes. The altered pattern of electrical activity does not seem to arise from an anomalous functioning of the auditory cortex, because all early components of the brain potentials, the N100, the MMN, and the P200 appear normal. Rather, the brain and behavioral measures point to disrupted information propagation from the auditory cortex to other cortical regions. Furthermore, the behavioral and neural manifestations of the disorder remained unchanged after 4 weeks of daily musical listening. These results show that congenital amusia can be detected in childhood despite regular musical exposure and normal intellectual functioning. PMID:22606299
Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.
Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S
2013-05-01
Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.
van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.
2017-01-01
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
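The cross-decoding step described above (training on response patterns from one group or modality and testing on the other) can be sketched as follows; the voxel patterns, trial counts, and the linear SVM are illustrative stand-ins for the surface-based multivoxel analysis that was actually used.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_categories = 80, 200, 4                    # face, body, scene, object
    auditory_patterns = rng.standard_normal((n_trials, n_voxels))    # e.g., blind group, sounds
    visual_patterns = rng.standard_normal((n_trials, n_voxels))      # e.g., sighted group, images
    labels = np.tile(np.arange(n_categories), n_trials // n_categories)

    clf = SVC(kernel="linear").fit(auditory_patterns, labels)        # train in one modality
    print(f"cross-modal decoding accuracy: {clf.score(visual_patterns, labels):.2f} (chance = 0.25)")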
Mechanism of auditory hypersensitivity in human autism using autism model rats.
Ida-Eto, Michiru; Hara, Nao; Ohkawara, Takeshi; Narita, Masaaki
2017-04-01
Auditory hypersensitivity is one of the major complications in autism spectrum disorder. The aim of this study was to investigate whether the auditory brain center is affected in autism model rats. Autism model rats were prepared by prenatal exposure to thalidomide on embryonic day 9 and 10 in pregnant rats. The superior olivary complex (SOC), a complex of auditory nuclei, was immunostained with anti-calbindin d28k antibody at postnatal day 50. In autism model rats, SOC immunoreactivity was markedly decreased. Strength of immunostaining of SOC auditory fibers was also weak in autism model rats. Surprisingly, the size of the medial nucleus of trapezoid body, a nucleus exerting inhibitory function in SOC, was significantly decreased in autism model rats. Auditory hypersensitivity may be, in part, due to impairment of inhibitory processing by the auditory brain center. © 2016 Japan Pediatric Society.
Donohue, Sarah E.; Liotti, Mario; Perez, Rick; Woldorff, Marty G.
2011-01-01
The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal. PMID:21964643
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2018-05-01
The trends in cochlear implantation candidacy and benefit have changed rapidly in the last two decades. It is now widely accepted that early implantation leads to better postimplant outcomes. Although some generalizations can be made about postimplant auditory and language performance, neural mechanisms need to be studied to predict individual prognosis. The aim of this study was to use functional magnetic resonance imaging (fMRI) to identify preimplant neuroimaging biomarkers that predict children's postimplant auditory and language outcomes as measured by parental observation/reports. This is a pre-post correlational measures study. Twelve possible cochlear implant candidates with bilateral severe to profound hearing loss were recruited via referrals for a clinical magnetic resonance imaging to ensure structural integrity of the auditory nerve for implantation. Participants underwent cochlear implantation at a mean age of 19.4 mo. All children used the advanced combination encoder strategy (ACE, Cochlear Corporation™, Nucleus ® Freedom cochlear implants). Three participants received an implant in the right ear; one in the left ear whereas eight participants received bilateral implants. Participants' preimplant neuronal activation in response to two auditory stimuli was studied using an event-related fMRI method. Blood oxygen level dependent contrast maps were calculated for speech and noise stimuli. The general linear model was used to create z-maps. The Auditory Skills Checklist (ASC) and the SKI-HI Language Development Scale (SKI-HI LDS) were administered to the parents 2 yr after implantation. A nonparametric correlation analysis was implemented between preimplant fMRI activation and postimplant auditory and language outcomes based on ASC and SKI-HI LDS. Statistical Parametric Mapping software was used to create regression maps between fMRI activation and scores on the aforementioned tests. Regression maps were overlaid on the Imaging Research Center infant template and visualized in MRIcro. Regression maps revealed two clusters of brain activation for the speech versus silence contrast and five clusters for the noise versus silence contrast that were significantly correlated with the parental reports. These clusters included auditory and extra-auditory regions such as the middle temporal gyrus, supramarginal gyrus, precuneus, cingulate gyrus, middle frontal gyrus, subgyral, and middle occipital gyrus. Both positive and negative correlations were observed. Correlation values for the different clusters ranged from -0.90 to 0.95 and were significant at a corrected p value of <0.05. Correlations suggest that postimplant performance may be predicted by activation in specific brain regions. The results of the present study suggest that (1) fMRI can be used to identify neuroimaging biomarkers of auditory and language performance before implantation and (2) activation in certain brain regions may be predictive of postimplant auditory and language performance as measured by parental observation/reports. American Academy of Audiology.
D Chorna, Olena; L Hamm, Ellyn; Shrivastava, Hemang; Maitre, Nathalie L
2018-01-01
Atypical maturation of auditory neural processing contributes to preterm-born infants' language delays. Event-related potential (ERP) measurement of speech-sound differentiation might fill a gap in treatment-response biomarkers to auditory interventions. We evaluated whether these markers could measure treatment effects in a quasi-randomized prospective study. Hospitalized preterm infants in passive or active, suck-contingent mother's voice exposure groups were not different at baseline. Post-intervention, the active group had greater increases in /du/-/gu/ differentiation in left frontal and temporal regions. Infants with brain injury had lower baseline /ba/-/ga/ and /du/-/gu/ differentiation than those without. ERP provides valid discriminative, responsive, and predictive biomarkers of infant speech-sound differentiation.
Temporal resolution of the Florida manatee (Trichechus manatus latirostris) auditory system.
Mann, David A; Colbert, Debborah E; Gaspard, Joseph C; Casper, Brandon M; Cook, Mandy L H; Reep, Roger L; Bauer, Gordon B
2005-10-01
Auditory evoked potentials (AEPs) of two Florida manatees (Trichechus manatus latirostris) were measured in response to amplitude-modulated tones. The AEP measurements showed weak responses to test stimuli from 4 kHz to 40 kHz. The manatee modulation rate transfer function (MRTF) is maximally sensitive to 150 and 600 Hz amplitude modulation (AM) rates. The 600 Hz AM rate is midway between the AM sensitivities of terrestrial mammals (chinchillas, gerbils, and humans) (80-150 Hz) and dolphins (1,000-1,200 Hz). Audiograms estimated from the input-output functions of the AEPs greatly underestimate behavioral hearing thresholds measured in two other manatees. This underestimation is probably due to the electrodes being located several centimeters from the brain.
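A minimal sketch of how a modulation rate transfer function of the kind described above can be assembled: for each amplitude-modulation (AM) rate, the spectral amplitude of the averaged evoked response at that rate is taken as the envelope-following response. The simulated responses, sampling rate, and AM rates are assumptions, not manatee data.

    import numpy as np

    fs = 10_000                                     # sampling rate (Hz), assumed
    t = np.arange(0.0, 0.5, 1.0 / fs)
    am_rates = [100, 150, 300, 600, 1200]           # AM rates to test (Hz)
    rng = np.random.default_rng(0)

    mrtf = {}
    for am in am_rates:
        # simulated averaged evoked response: small component at the AM rate buried in noise
        response = 0.2 * np.sin(2 * np.pi * am * t) + rng.standard_normal(t.size)
        spectrum = np.abs(np.fft.rfft(response)) / t.size
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
        mrtf[am] = spectrum[np.argmin(np.abs(freqs - am))]

    for am, amp in mrtf.items():
        print(f"AM {am:>5d} Hz -> response amplitude {amp:.3f} (a.u.)")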
Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.
Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming
2017-02-01
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high-resolution functional magnetic resonance imaging (fMRI) dataset acquired while participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using the corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated with power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and the power intensity deviants of PSD profiles. Our study additionally substantiates the feasibility and advantage of the naturalistic paradigm for studying the neural encoding of complex auditory information.
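A hedged, toy version of the audio-side pipeline described above: Welch power-spectral-density descriptors are computed for short audio snippets, clustered into representative profiles, and a linear SVM is trained to separate them. The synthetic snippets and all parameters are assumptions, and the classifier here is fed the PSD descriptors themselves rather than fMRI activity, so the sketch only shows the shape of the pipeline.

    import numpy as np
    from scipy.signal import welch
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    fs = 16_000
    rng = np.random.default_rng(0)
    # synthetic 1-s snippets: half smoothed (low-frequency-weighted) noise, half broadband noise
    snippets = [np.convolve(rng.standard_normal(fs), np.ones(20) / 20, mode="same")
                if i % 2 == 0 else rng.standard_normal(fs)
                for i in range(40)]

    psd_descriptors = np.array([welch(s, fs=fs, nperseg=1024)[1] for s in snippets])
    profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(psd_descriptors)

    clf = SVC(kernel="linear").fit(psd_descriptors, profiles)
    print("training accuracy on PSD profiles:", clf.score(psd_descriptors, profiles))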
Titania nanotube arrays as potential interfaces for neurological prostheses
NASA Astrophysics Data System (ADS)
Sorkin, Jonathan Andrew
Neural prostheses can make a dramatic improvement for those suffering from visual, auditory, cognitive, and motor control disabilities, allowing them to regain functionality through the stimulation or recording of electrical signaling. However, the longevity of these devices is limited due to the neural tissue response to the implanted device. In response to the implant penetrating the blood-brain barrier and causing trauma to the tissue, the body forms a scar to isolate the implant in order to protect the nearby tissue. The scar tissue is a result of reactive gliosis and produces an insulating sheath, encapsulating the implant. The glial sheath limits the stimulating or recording capabilities of the implant, reducing its effectiveness over the long term. A favorable interaction with this tissue would be the direct adhesion of neurons onto the contacts of the implant, and the prevention of glial encapsulation. With direct neuronal adhesion, the effectiveness and longevity of the device would be significantly improved. Titania nanotube arrays, fabricated using electrochemical anodization, provide a conductive architecture capable of altering cellular response. This work focuses on the fabrication of different titania nanotube array architectures to determine how their structures and properties influence the response of neural tissue, modeled using the C17.2 murine neural stem cell subclone, and whether glial encapsulation can be reduced while neuronal adhesion is promoted.
Speech comprehension aided by multiple modalities: behavioural and neural interactions
McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.
2014-01-01
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262
Goodus, Matthew T; Guzman, Alanna M; Calderon, Frances; Jiang, Yuhui; Levison, Steven W
2015-01-01
Pediatric traumatic brain injury is a significant problem that affects many children each year. Progress is being made in developing neuroprotective strategies to combat these injuries. However, investigators are a long way from therapies to fully preserve injured neurons and glia. To restore neurological function, regenerative strategies will be required. Given the importance of stem cells in repairing damaged tissues and the known persistence of neural precursors in the subventricular zone (SVZ), we evaluated regenerative responses of the SVZ to a focal brain lesion. As tissues repair more slowly with aging, injury responses of male Sprague Dawley rats at 6, 11, 17, and 60 days of age and C57Bl/6 mice at 14 days of age were compared. In the injured immature animals, cell proliferation in the dorsolateral SVZ more than doubled by 48 h. By contrast, the proliferative response was almost undetectable in the adult brain. Three approaches were used to assess the relative numbers of bona fide neural stem cells, as follows: the neurosphere assay (on rats injured at postnatal day 11, P11), flow cytometry using a novel 4-marker panel (on mice injured at P14) and staining for stem/progenitor cell markers in the niche (on rats injured at P17). Precursors from the injured immature SVZ formed almost twice as many spheres as precursors from uninjured age-matched brains. Furthermore, spheres formed from the injured brain were larger, indicating that the neural precursors that formed these spheres divided more rapidly. Flow cytometry revealed a 2-fold increase in the percentage of stem cells, a 4-fold increase in multipotential progenitor-3 cells and a 2.5-fold increase in glial-restricted progenitor-2/multipotential-3 cells. Analogously, there was a 2-fold increase in the mitotic index of nestin+/Mash1- immunoreactive cells within the immediately subependymal region. As the early postnatal SVZ is predominantly generating glial cells, an expansion of precursors might not necessarily lead to the production of many new neurons. On the contrary, many BrdU+/doublecortin+ cells were observed streaming out of the SVZ into the neocortex 2 weeks after injuries to P11 rats. However, very few new mature neurons were seen adjacent to the lesion 28 days after injury. Altogether, these data indicate that immature SVZ cells mount a more robust proliferative response to a focal brain injury than adult cells, which includes an expansion of stem cells, primitive progenitors and neuroblasts. Nonetheless, this regenerative response does not result in significant neuronal replacement, indicating that new strategies need to be implemented to retain the regenerated neurons and glia that are being produced. © 2014 S. Karger AG, Basel.
Langguth, Berthold; Schecklmann, Martin; Lehner, Astrid; Landgrebe, Michael; Poeppl, Timm Benjamin; Kreuzer, Peter Michal; Schlee, Winfried; Weisz, Nathan; Vanneste, Sven; De Ridder, Dirk
2012-01-01
An inherent limitation of functional imaging studies is their correlational approach. More information about critical contributions of specific brain regions can be gained by focal transient perturbation of neural activity in specific regions with non-invasive focal brain stimulation methods. Functional imaging studies have revealed that tinnitus is related to alterations in neuronal activity of central auditory pathways. Modulation of neuronal activity in auditory cortical areas by repetitive transcranial magnetic stimulation (rTMS) can reduce tinnitus loudness and, if applied repeatedly, exerts therapeutic effects, confirming the relevance of auditory cortex activation for tinnitus generation and persistence. Measurements of oscillatory brain activity before and after rTMS demonstrate that the same stimulation protocol has different effects on brain activity in different patients, presumably related to interindividual differences in baseline activity in the clinically heterogeneous study cohort. In addition to alterations in auditory pathways, imaging techniques also indicate the involvement of non-auditory brain areas, such as the fronto-parietal “awareness” network and the non-tinnitus-specific distress network consisting of the anterior cingulate cortex, anterior insula, and amygdala. Involvement of the hippocampus and the parahippocampal region putatively reflects the relevance of memory mechanisms in the persistence of the phantom percept and the associated distress. Preliminary studies targeting the dorsolateral prefrontal cortex, the dorsal anterior cingulate cortex, and the parietal cortex with rTMS and with transcranial direct current stimulation confirm the relevance of the mentioned non-auditory networks. Available data indicate the important value added by brain stimulation as a complementary approach to neuroimaging for identifying the neuronal correlates of the various clinical aspects of tinnitus. PMID:22509155
Primary Auditory Cortex is Required for Anticipatory Motor Response.
Li, Jingcheng; Liao, Xiang; Zhang, Jianxiong; Wang, Meng; Yang, Nian; Zhang, Jun; Lv, Guanghui; Li, Haohong; Lu, Jian; Ding, Ran; Li, Xingyi; Guang, Yu; Yang, Zhiqi; Qin, Han; Jin, Wenjun; Zhang, Kuan; He, Chao; Jia, Hongbo; Zeng, Shaoqun; Hu, Zhian; Nelken, Israel; Chen, Xiaowei
2017-06-01
The ability of the brain to predict future events based on the pattern of recent sensory experience is critical for guiding an animal's behavior. Neocortical circuits for ongoing processing of sensory stimuli are extensively studied, but their contributions to the anticipation of upcoming sensory stimuli remain less understood. We therefore used in vivo cellular imaging and fiber photometry to record from mouse primary auditory cortex and elucidate its role in processing anticipated stimulation. We found neuronal ensembles in layers 2/3, 4, and 5 that were activated in relation to anticipated sound events following rhythmic stimulation. These neuronal activities correlated with the occurrence of anticipatory motor responses in an auditory learning task. Optogenetic manipulation experiments revealed an essential role of such neuronal activities in producing the anticipatory behavior. These results strongly suggest that the neural circuits of primary sensory cortex are critical for coding predictive information and transforming it into anticipatory motor behavior. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Neuroscientific evidence for defensive avoidance of fear appeals
Kessels, Loes T E; Ruiter, Robert A C; Wouters, Liesbeth; Jansma, Bernadette M
2014-01-01
Previous studies indicate that people respond defensively to threatening health information, especially when the information challenges self-relevant goals. The authors investigated whether reduced acceptance of self-relevant health risk information is already visible in early attention allocation processes. In two experimental studies, participants watched high- and low-threat health commercials while having to pay attention to infrequent deviant auditory stimuli embedded in a sequence of frequent auditory stimuli (oddball paradigm). The amount of attention allocation was measured by recording event-related brain potentials (i.e., P300 ERPs) and reaction times. Smokers showed larger P300 amplitudes in response to the auditory targets while watching high-threat rather than low-threat anti-smoking commercials. In contrast, non-smokers showed smaller P300 amplitudes while watching high-threat as opposed to low-threat anti-smoking commercials. In conclusion, the findings provide further neuroscientific support for the hypothesis that threatening health information causes more avoidance responses among those for whom the health threat is self-relevant. PMID:24811878
Bisensory stimulation increases gamma-responses over multiple cortical regions.
Sakowitz, O W; Quiroga, R Q; Schürmann, M; Başar, E
2001-04-01
In the framework of the discussion about gamma (approx. 40 Hz) oscillations as information carriers in the brain, we investigated the relationship between gamma responses in the EEG and intersensory association. Auditory evoked potentials (AEPs) and visual evoked potentials (VEPs) were compared with bisensory evoked potentials (BEPs; simultaneous auditory and visual stimulation) in 15 subjects. Gamma responses in AEPs, VEPs and BEPs were assessed by means of wavelet decomposition. Overall maximum gamma-components post-stimulus were highest in BEPs (P < 0.01). Bisensory evoked gamma-responses also showed significant central, parietal and occipital amplitude-increases (P < 0.001, P < 0.01, P < 0.05, respectively; prestimulus interval as baseline). These were of greater magnitude when compared with the unisensory responses. As a correlate of the marked gamma responses to bimodal stimulation we suggest a process of 'intersensory association', i.e. one of the steps between sensory transmission and perception. Our data may be interpreted as a further example of function-related gamma responses in the EEG.
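As a side note for readers who want to reproduce this style of analysis, the sketch below shows one way a gamma-band (approx. 40 Hz) response could be quantified from an averaged evoked potential with a Morlet wavelet decomposition. It is a minimal illustration only; the sampling rate, frequency band, cycle count, and placeholder data are assumptions, not parameters taken from the study.

```python
import numpy as np

def morlet_wavelet(freq, fs, n_cycles=6):
    """Complex Morlet wavelet centred on `freq` Hz at sampling rate `fs`."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # temporal width of the wavelet
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    gauss = np.exp(-t**2 / (2 * sigma_t**2))
    return gauss * np.exp(2j * np.pi * freq * t)

def gamma_response(ep, fs, band=(30, 50)):
    """Peak gamma-band envelope of an evoked potential `ep` (1-D array)."""
    envelopes = []
    for f in range(band[0], band[1] + 1, 5):
        w = morlet_wavelet(f, fs)
        analytic = np.convolve(ep, w, mode="same")   # wavelet transform at frequency f
        envelopes.append(np.abs(analytic))           # amplitude envelope
    return np.max(envelopes)                         # overall post-stimulus gamma maximum

# Hypothetical usage: a 1-s post-stimulus evoked potential sampled at 500 Hz
fs = 500
ep = np.random.randn(fs)   # placeholder for an averaged evoked potential
print(gamma_response(ep, fs))
```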
The auditory neural network in man
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
The principles of anatomy and physiology necessary for understanding brain wave recordings made from the scalp are briefly discussed. Brain waves evoked by sounds are then described and certain of their features are related to the physical aspects of the stimulus and the psychological state of the listener. It is proposed that data obtained through probes located outside the head can reveal a large amount of detail about brain activity. It is argued that analysis of such records enables one to detect the response of the nervous system to an acoustic message at the moment of its inception at the ear, and to follow the progress of the acoustic message up through the various brain levels as progressively more complex operations are performed upon it. Even those brain events responsible for the highest level of signal processing - distinguishing between similar signals and making decisions about them - seem to generate characteristic and identifiable electrical waves.
2017-01-01
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
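For orientation, the following sketch shows how signal and noise correlations of the kind analyzed above are commonly computed for a pair of simultaneously recorded neurons. The array shapes and the Poisson placeholder counts are assumptions; this is not the authors' analysis code.

```python
import numpy as np

def signal_noise_correlations(counts_a, counts_b):
    """Signal and noise correlation for two neurons.

    counts_a, counts_b: arrays of shape (n_stimuli, n_trials) holding
    spike counts for each stimulus (e.g., azimuth) and repetition.
    """
    # Signal correlation: similarity of the trial-averaged tuning curves
    tuning_a = counts_a.mean(axis=1)
    tuning_b = counts_b.mean(axis=1)
    r_signal = np.corrcoef(tuning_a, tuning_b)[0, 1]

    # Noise correlation: correlation of trial-by-trial fluctuations after
    # removing each neuron's mean response to every stimulus
    resid_a = (counts_a - tuning_a[:, None]).ravel()
    resid_b = (counts_b - tuning_b[:, None]).ravel()
    r_noise = np.corrcoef(resid_a, resid_b)[0, 1]
    return r_signal, r_noise

# Hypothetical usage: 12 azimuths x 20 repetitions of random spike counts
rng = np.random.default_rng(0)
a = rng.poisson(5, size=(12, 20))
b = rng.poisson(5, size=(12, 20))
print(signal_noise_correlations(a, b))
```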
De Risio, Luisa; Lewis, Tom; Freeman, Julia; de Stefani, Alberta; Matiasek, Lara; Blott, Sarah
2011-06-01
The objectives of this study were to estimate prevalence, heritability and genetic correlations of congenital sensorineural deafness (CSD) and pigmentation phenotypes in the Border Collie. Entire litters of Border Collies that presented to the Animal Health Trust (1994-2008) for assessment of hearing status by brain stem auditory evoked response (BAER) at 4-10 weeks of age were included. Heritability and genetic correlations were estimated using residual maximum likelihood (REML). Of 4143 puppies that met the inclusion criteria, 97.6% had normal hearing status, 2.0% were unilaterally deaf and 0.4% were bilaterally deaf. Heritability of deafness as a trichotomous trait (normal/unilaterally deaf/bilaterally deaf) was estimated at 0.42 using multivariate analysis. Genetic correlations of deafness with iris colour and merle coat colour were 0.58 and 0.26, respectively. These results indicate that there is a significant genetic effect on CSD in Border Collies and that some of the genes determining deafness also influence pigmentation phenotypes. Copyright © 2010 Elsevier Ltd. All rights reserved.
[Pneumococcal meningitis revealing dysplasia of the bony labyrinth in an infant].
Louaib, D; François, M; Coderc, E; Dieu, S; Nathanson, M; Narcy, P; Gaudelus, J
1996-03-01
Dysplasias of the bony labyrinth are frequently associated with cerebrospinal fluid fistula and are usually discovered because of recurrent meningitis. A 1-year-old infant was admitted for pneumococcal meningitis which appeared 2 days after the onset of a clear otorrhea from the right ear. The same organism was isolated from the otorrhea fluid, which was also confirmed cytochemically to contain cerebrospinal fluid. The meningitis rapidly resolved with antibiotic treatment. Auditory brain stem responses were abolished in the right ear. CT of the temporal bones showed a pseudo-Mondini type labyrinth dysplasia in the right ear and a Mondini type dysplasia in the left. A translabyrinthine cerebrospinal fluid fistula, passing through a perforation in the stapedial footplate, was discovered on surgical exploration of the right ear. The leak was cured by packing the vestibule and obturating both the oval and round windows. Three years after the operation, the child had not experienced any further episode of otorrhea or meningitis. Features suggesting a translabyrinthine fistula, especially otorrhea and deafness, should be systematically searched for in any child with bacterial meningitis. Closure of these fistulas can prevent severe infectious recurrences.
Human inferior colliculus activity relates to individual differences in spoken language learning
Chandrasekaran, Bharath; Kraus, Nina
2012-01-01
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural “sharpening” models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models. PMID:22131377
The effect of mobile phone to audiologic system.
Kerekhanjanarong, Virachai; Supiyaphun, Pakpoom; Naratricoon, Jantra; Laungpitackchumpon, Prinya
2005-09-01
Mobile phones have come into widespread use, and there are many possible adverse effects on health. Mobile phone use generates radiofrequency electromagnetic fields (EMF) that are potentially harmful, particularly to hearing. Ninety-eight subjects underwent hearing evaluations at the Department of Otolaryngology, Faculty of Medicine, King Chulalongkorn Memorial Hospital, Chulalongkorn University: 31 males and 67 females, with a mean age of 30.48 +/- 9.51 years. Hearing was assessed in all subjects by pure-tone audiometry, tympanometry, otoacoustic emission (OAE) and auditory brain stem evoked response (ABR). The average duration of mobile phone use was 32.54 +/- 27.64 months; 57 subjects usually used the right side and 41 the left side. Average use per day was 26.31 +/- 30.91 minutes (range 3 to 180 minutes). Comparison of the audiograms, both pure-tone and speech audiometry, between the dominant and non-dominant sides showed no significant difference. However, among the 8 subjects who used their mobile phone for more than 60 minutes per day, the hearing threshold of the dominant ears was worse than that of the non-dominant ears.
Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan
2013-10-01
Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.
Pudar, Goran; Vlaski, Ljiljana; Filipović, Danka; Tanackov, Ilija
2010-01-01
Problems of hearing disturbances in persons suffering from diabetes have been attracting great attention for many decades. In this study we examined the auditory function of 50 patients with diabetes mellitus type 1 of varying duration by analyzing the results of pure-tone audiometry and brainstem auditory evoked potentials. The results were compared with those of 30 healthy subjects matched for age and gender. The diabetic patients were divided into groups according to disease duration (group I, 0-5 years; group II, 6-10 years; group III, over 10 years). A statistically significant increase in sensorineural hearing loss was found in the diabetics with increasing disease duration (group I = 14.09%, group II = 21.39%, group III = 104.89%). At a significance threshold of p = 0.05, the brainstem auditory evoked potentials showed no significant differences in mean absolute latencies between the controls and the diabetics at any level, on either side. For interwave latencies, the diabetic patients showed a significant qualitative difference in the I-III and I-V intervals in both ears in terms of the internal distribution of responses. In cases of sensorineural hearing loss we found a significant association with prolonged latencies of wave I in the right ear and of waves I and V in the left ear. The cause of these findings most probably lies in individual differences in how the organism reacts to the consequences of the disease (disturbance in the distal part of the cochlear nerve). The results show a significant sensorineural hearing loss in patients with diabetes mellitus type 1 in accordance with disease duration. We also found qualitative changes in the brainstem auditory evoked potentials of the diabetic patients in comparison with the controls, as well as significant quantitative changes related to the presence of sensorineural hearing loss.
Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J
2014-01-01
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.
The neural processing of hierarchical structure in music and speech at different timescales
Farbood, Morwaread M.; Heeger, David J.; Marcus, Gary; Hasson, Uri; Lerner, Yulia
2015-01-01
Music, like speech, is a complex auditory signal that contains structures at multiple timescales, and as such is a potentially powerful entry point into the question of how the brain integrates complex streams of information. Using an experimental design modeled after previous studies that used scrambled versions of a spoken story (Lerner et al., 2011) and a silent movie (Hasson et al., 2008), we investigate whether listeners perceive hierarchical structure in music beyond short (~6 s) time windows and whether there is cortical overlap between music and language processing at multiple timescales. Experienced pianists were presented with an extended musical excerpt scrambled at multiple timescales—by measure, phrase, and section—while measuring brain activity with functional magnetic resonance imaging (fMRI). The reliability of evoked activity, as quantified by inter-subject correlation of the fMRI responses, was measured. We found that response reliability depended systematically on musical structure coherence, revealing a topographically organized hierarchy of processing timescales. Early auditory areas (at the bottom of the hierarchy) responded reliably in all conditions. For brain areas at the top of the hierarchy, the original (unscrambled) excerpt evoked more reliable responses than any of the scrambled excerpts, indicating that these brain areas process long-timescale musical structures, on the order of minutes. The topography of processing timescales was analogous with that reported previously for speech, but the timescale gradients for music and speech overlapped with one another only partially, suggesting that temporally analogous structures—words/measures, sentences/musical phrases, paragraph/sections—are processed separately. PMID:26029037
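The reliability measure described above, inter-subject correlation of the fMRI responses, is straightforward to compute. A minimal sketch follows, assuming a leave-one-out scheme in which each subject's regional time course is correlated with the average of the remaining subjects; array sizes and data are placeholders, not details from the study.

```python
import numpy as np

def intersubject_correlation(bold):
    """Leave-one-out inter-subject correlation for one brain region.

    bold: array of shape (n_subjects, n_timepoints) with the region's
    fMRI time course for each listener under the same stimulus.
    """
    n = bold.shape[0]
    r = np.empty(n)
    for i in range(n):
        others = np.delete(bold, i, axis=0).mean(axis=0)  # average of the rest
        r[i] = np.corrcoef(bold[i], others)[0, 1]         # subject vs. group average
    return r.mean()

# Hypothetical usage: 10 subjects, 300 time points sharing a common signal
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)
data = shared + 0.5 * rng.standard_normal((10, 300))
print(intersubject_correlation(data))
```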
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. Another area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Petersen, Christopher L.; Timothy, Miky; Kim, D. Spencer; Bhandiwad, Ashwin A.; Mohr, Robert A.; Sisneros, Joseph A.; Forlano, Paul M.
2013-01-01
While the neural circuitry and physiology of the auditory system is well studied among vertebrates, far less is known about how the auditory system interacts with other neural substrates to mediate behavioral responses to social acoustic signals. One species that has been the subject of intensive neuroethological investigation with regard to the production and perception of social acoustic signals is the plainfin midshipman fish, Porichthys notatus, in part because acoustic communication is essential to their reproductive behavior. Nesting male midshipman vocally court females by producing a long duration advertisement call. Females localize males by their advertisement call, spawn and deposit all their eggs in their mate’s nest. As multiple courting males establish nests in close proximity to one another, the perception of another male’s call may modulate individual calling behavior in competition for females. We tested the hypothesis that nesting males exposed to advertisement calls of other males would show elevated neural activity in auditory and vocal-acoustic brain centers as well as differential activation of catecholaminergic neurons compared to males exposed only to ambient noise. Experimental brains were then double labeled by immunofluorescence (-ir) for tyrosine hydroxylase (TH), an enzyme necessary for catecholamine synthesis, and cFos, an immediate-early gene product used as a marker for neural activation. Males exposed to other advertisement calls showed a significantly greater percentage of TH-ir cells colocalized with cFos-ir in the noradrenergic locus coeruleus and the dopaminergic periventricular posterior tuberculum, as well as increased numbers of cFos-ir neurons in several levels of the auditory and vocal-acoustic pathway. Increased activation of catecholaminergic neurons may serve to coordinate appropriate behavioral responses to male competitors. Additionally, these results implicate a role for specific catecholaminergic neuronal groups in auditory-driven social behavior in fishes, consistent with a conserved function in social acoustic behavior across vertebrates. PMID:23936438
Lavigne, Katie M; Woodward, Todd S
2018-04-01
Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
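For readers unfamiliar with constrained principal component analysis, the sketch below illustrates its core logic: regress the BOLD data onto a task-timing design matrix and then decompose the predicted (task-related) portion with an SVD. It follows the general idea only, not the published group fMRI-CPCA pipeline; all array shapes and data are hypothetical.

```python
import numpy as np

def fmri_cpca(Y, G, n_components=3):
    """Core step of constrained PCA for fMRI (sketch).

    Y: (n_scans, n_voxels) BOLD data; G: (n_scans, n_predictors) design
    matrix coding post-stimulus time bins for each task condition.
    """
    # 1) Constrain: keep only the variance in Y predictable from task timing
    beta, *_ = np.linalg.lstsq(G, Y, rcond=None)
    Y_hat = G @ beta                                   # task-related portion of the data

    # 2) PCA (via SVD) of the predicted data yields functional networks
    U, s, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    components = Vt[:n_components]                     # voxel loadings ("networks")
    scores = U[:, :n_components] * s[:n_components]    # their time courses
    return components, scores

# Hypothetical usage with random data: 200 scans, 500 voxels, 8 predictors
rng = np.random.default_rng(2)
Y = rng.standard_normal((200, 500))
G = rng.standard_normal((200, 8))
nets, ts = fmri_cpca(Y, G)
print(nets.shape, ts.shape)
```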
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, the different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to attention effects common to all three tasks within each modality, or to interactions between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H
2016-06-01
There have been a few reports about the effects of chronic stroke on auditory temporal processing abilities and no reports regarding the effects of brain damage lateralization on these abilities. Our study was performed on 2 groups of chronic stroke patients to compare the effects of hemispheric lateralization of brain damage and of age on auditory temporal processing. Seventy persons with normal hearing, including 25 normal controls, 25 stroke patients with damage to the right brain, and 20 stroke patients with damage to the left brain, without aphasia and with an age range of 31-71 years were studied. A gap-in-noise (GIN) test and a duration pattern test (DPT) were conducted for each participant. Significant differences were found between the 3 groups for GIN threshold, overall GIN percent score, and DPT percent score in both ears (P ≤ .001). For all stroke patients, performance in both GIN and DPT was poorer in the ear contralateral to the damaged hemisphere, which was significant in DPT and in 2 measures of GIN (P ≤ .046). Advanced age had a negative relationship with temporal processing abilities for all 3 groups. In cases of confirmed left- or right-side stroke involving auditory cerebrum damage, poorer auditory temporal processing is associated with the ear contralateral to the damaged cerebral hemisphere. Replication of our results and the use of GIN and DPT tests for the early diagnosis of auditory processing deficits and for monitoring the effects of aural rehabilitation interventions are recommended. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Blast-Induced Tinnitus and Hearing Loss in Rats: Behavioral and Imaging Assays
Mao, Johnny C.; Pace, Edward; Pierozynski, Paige; Kou, Zhifeng; Shen, Yimin; VandeVord, Pamela; Haacke, E. Mark; Zhang, Xueguo
2012-01-01
The current study used a rat model to investigate the underlying mechanisms of blast-induced tinnitus, hearing loss, and associated traumatic brain injury (TBI). Seven rats were used to evaluate behavioral evidence of tinnitus and hearing loss, as well as TBI using magnetic resonance imaging, following a single 10-msec blast at 14 psi or 194 dB sound pressure level (SPL). The results demonstrated that the blast exposure induced early onset of tinnitus and central hearing impairment over a broad frequency range. The induced tinnitus and central hearing impairment tended to shift towards high frequencies over time. Hearing thresholds measured with auditory brainstem responses also showed an immediate elevation followed by recovery on day 14, coinciding with the behaviorally measured results. Diffusion tensor magnetic resonance imaging demonstrated significant damage and compensatory plastic changes in certain auditory brain regions, with the majority of changes occurring in the inferior colliculus and medial geniculate body. The absence of significant microstructural changes in the corpus callosum indicates that the blast exposure used here mainly exerted its effects through the auditory pathways rather than through direct impact on the brain parenchyma. The results show that this animal model is appropriate for investigating the mechanisms underlying blast-induced tinnitus, hearing loss, and related TBI. Continued investigation along these lines will help identify pathology and injury/recovery patterns, aiding the development of effective treatment strategies. PMID:21933015
Delays in auditory processing identified in preschool children with FASD
Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.
2012-01-01
Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372
Hearing impairment in the P23H-1 retinal degeneration rat model
Sotoca, Jorge V.; Alvarado, Juan C.; Fuentes-Santamaría, Verónica; Martinez-Galan, Juan R.; Caminos, Elena
2014-01-01
The transgenic P23H line 1 (P23H-1) rat expresses a variant of rhodopsin with a mutation that leads to loss of visual function. This rat strain is an experimental model usually employed to study photoreceptor degeneration. Although the mutated protein should not interfere with other sensory functions, the observation of a severe loss of auditory reflexes in response to natural sounds led us to record auditory brainstem responses (ABRs). Animals were grouped by hearing level according to their responses to natural stimuli (hand clapping and kissing sounds). Of all the analyzed animals, 25.9% presented auditory loss before 50 days of age (P50) and 45% were totally deaf by P200. ABR recordings showed that all the rats had a higher hearing threshold than the control Sprague-Dawley (SD) rats, and also higher than that of other rat strains. The integrity of the central and peripheral auditory pathway was analyzed by histology and immunocytochemistry. In the cochlear nucleus (CN), statistical differences were found between SD and P23H-1 rats in VGluT1 distribution, but none were found when labeling all the CN synapses with anti-Syntaxin. This finding suggests anatomical and/or molecular abnormalities in the downstream auditory pathway. The inner ear of the hypoacusic P23H-1 rats showed several anatomical defects, including loss and disruption of hair cells and spiral ganglion neurons. Together, these results can explain, at least in part, how hearing impairment can occur in a high percentage of P23H-1 rats. P23H-1 rats may therefore be considered an experimental model with both visual and auditory dysfunction for future research. PMID:25278831
Cortical Mechanisms of Speech Perception in Noise
ERIC Educational Resources Information Center
Wong, Patrick C. M.; Uppunda, Ajith K.; Parrish, Todd B.; Dhar, Sumitrajit
2008-01-01
Purpose: The present study examines the brain basis of listening to spoken words in noise, which is a ubiquitous characteristic of communication, with the focus on the dorsal auditory pathway. Method: English-speaking young adults identified single words in 3 listening conditions while their hemodynamic response was measured using fMRI: speech in…
Le Scao, Y; Robier, A; Baulieu, J L; Beutter, P; Pourcelot, L
1992-01-01
Brain activation procedures associated with single photon emission tomography (SPET) have recently been developed in healthy controls and diseased patients in order to help in their diagnosis and treatment. We investigated the effects of a promontory test (PT) on the cerebral distribution of technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) in 7 profoundly deaf patients, 6 PT+ and one PT-. The count variation in the temporal lobe was calculated on 6 coronal slices using the ratio (Rstimulation-Rdeprivation)/Rdeprivation where R = counts in the temporal lobe/whole-brain count. A count increase in the temporal lobe was observed in all patients and was higher in all patients with PT+ than in the patient with PT-. The problems of head positioning and resolution of the system were taken into account, and we considered that the maximal count increment was related to the auditory cortex response to the stimulus. Further clinical investigations with high-resolution systems have to be performed in order to validate this presurgery test in cochlear implant assessment.
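The count-variation index defined above, (Rstimulation - Rdeprivation)/Rdeprivation with R = temporal-lobe counts divided by whole-brain counts, can be computed directly. The short sketch below uses hypothetical counts for a single coronal slice; it is only an illustration of the stated ratio.

```python
def temporal_activation_index(temporal_stim, whole_stim,
                              temporal_rest, whole_rest):
    """Relative count change in the temporal lobe, as defined above:
    (R_stimulation - R_deprivation) / R_deprivation,
    where R = temporal-lobe counts / whole-brain counts."""
    r_stim = temporal_stim / whole_stim
    r_rest = temporal_rest / whole_rest
    return (r_stim - r_rest) / r_rest

# Hypothetical counts from one coronal slice (stimulation vs. deprivation)
print(temporal_activation_index(12500, 410000, 11000, 400000))
```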
Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise
Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.
2013-01-01
Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
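The abstract does not specify the published filtering algorithm, so the sketch below only illustrates the general idea of de-noising in the spectro-temporal modulation domain, favouring slow temporal modulations (long sounds) in the log-spectrogram and resynthesising with the noisy phase. All parameters and the test signal are illustrative assumptions rather than the authors' method.

```python
import numpy as np
from scipy.signal import stft, istft

def modulation_domain_denoise(x, fs, tm_cut=8.0, keep=0.6):
    """Sketch of modulation-domain filtering: attenuate fast temporal
    modulations of the log-spectrogram, then resynthesise the waveform.
    `tm_cut` (Hz) and `keep` are illustrative choices."""
    f, t, Z = stft(x, fs=fs, nperseg=512, noverlap=384)
    mag, phase = np.abs(Z), np.angle(Z)
    logmag = np.log(mag + 1e-8)

    # 2-D Fourier transform of the log-spectrogram = modulation spectrum
    M = np.fft.fft2(logmag)
    tm = np.fft.fftfreq(logmag.shape[1], d=t[1] - t[0])   # temporal modulation axis (Hz)
    mask = (np.abs(tm) <= tm_cut)[None, :]                # pass slow modulations
    M_filt = M * (mask + (1 - mask) * (1 - keep))         # attenuate the rest

    logmag_f = np.real(np.fft.ifft2(M_filt))
    Z_f = np.exp(logmag_f) * np.exp(1j * phase)           # reuse the noisy phase
    _, y = istft(Z_f, fs=fs, nperseg=512, noverlap=384)
    return y

# Hypothetical usage: a noisy 1-s tone at 16 kHz
fs = 16000
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs) + 0.5 * rng.standard_normal(fs)
print(modulation_domain_denoise(x, fs).shape)
```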
Tağluk, M E; Cakmak, E D; Karakaş, S
2005-04-30
Cognitive brain responses to external stimuli, as measured by event related potentials (ERPs), have been analyzed from a variety of perspectives to investigate brain dynamics. Here, the brain responses of healthy subjects to auditory oddball paradigms, standard and deviant stimuli, recorded on an Fz electrode site were studied using a short-term version of the smoothed Wigner-Ville distribution (STSW) method. A smoothing kernel was designed to preserve the auto energy of the signal with maximum time and frequency resolutions. Analysis was conducted mainly on the time-frequency distributions (TFDs) of sweeps recorded during successive trials including the TFD of averaged single sweeps as the evoked time-frequency (ETF) brain response and the average of TFDs of single sweeps as the time-frequency (TF) brain response. Also the power entropy and the phase angles of the signal at frequency f and time t locked to the stimulus onset were studied across single trials as the TF power-locked and the TF phase-locked brain responses, respectively. TFDs represented in this way demonstrated the ERP spectro-temporal characteristics from multiple perspectives. The time-varying energy of the individual components manifested interesting TF structures in the form of amplitude modulated (AM) and frequency modulated (FM) energy bursts. The TF power-locked and phase-locked brain responses provoked ERP energies in a manner modulated by cognitive functions, an observation requiring further investigation. These results may lead to a better understanding of integrative brain dynamics.
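For readers unfamiliar with the Wigner-Ville family of time-frequency distributions, the sketch below implements a generic pseudo (lag-windowed) Wigner-Ville distribution. It is not the authors' STSW implementation; the Hamming lag window stands in for the smoothing kernel, and the test signal is an illustrative assumption.

```python
import numpy as np
from scipy.signal import hilbert

def pseudo_wigner_ville(x):
    """Sketch of a pseudo (lag-windowed) Wigner-Ville distribution.

    Returns a time x frequency array. The Hamming lag window provides the
    frequency smoothing; a fully smoothed version would also average over
    neighbouring time points.
    """
    z = hilbert(x)                                    # analytic signal
    n = len(z)
    taus = np.arange(-(n // 2), n // 2 + 1)
    win = np.hamming(len(taus))                       # lag-domain smoothing window
    tfd = np.zeros((n, n))
    for ti in range(n):
        r = np.zeros(n, dtype=complex)                # instantaneous autocorrelation
        for w, tau in zip(win, taus):
            if 0 <= ti + tau < n and 0 <= ti - tau < n:
                r[tau % n] = w * z[ti + tau] * np.conj(z[ti - tau])
        tfd[ti] = np.real(np.fft.fft(r))              # FFT over lag gives frequency
    return tfd

# Hypothetical usage: a short frequency-modulated burst, 256 samples at 256 Hz
fs = 256
t = np.arange(256) / fs
x = np.sin(2 * np.pi * (6 + 20 * t) * t)
print(pseudo_wigner_ville(x).shape)
```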
Neurobiology of rhythmic motor entrainment.
Molinari, Marco; Leggio, Maria G; De Martin, Martina; Cerasa, Antonio; Thaut, Michael
2003-11-01
Timing is extremely important for movement, and understanding the neurobiological basis of rhythm perception and reproduction can be helpful in addressing motor recovery after brain lesions. In this quest, the science of music might provide interesting hints for better understanding the brain timing mechanism. The report focuses on the neurobiological substrate of sensorimotor transformation of time data, highlighting the power of auditory rhythmic stimuli in guiding motor acts. The cerebellar role of timing is addressed in subjects with cerebellar damage; subsequently, cerebellar timing processing is highlighted through an fMRI study of professional musicians. The two approaches converge to demonstrate that different levels of time processing exist, one conscious and one not, and to support the idea that timing is a distributed function. The hypothesis that unconscious motor responses to auditory rhythmic stimuli can be relevant in guiding motor recovery and modulating music perception is advanced and discussed.
Practiced musical style shapes auditory skills.
Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari
2012-04-01
Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians using a novel, fast, and musical sounding multifeature MMN paradigm. We found MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitude than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. Furthermore, we observed a tendency toward shorter latency of the MMN to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music. © 2012 New York Academy of Sciences.
Bachiller, Alejandro; Romero, Sergio; Molina, Vicente; Alonso, Joan F; Mañanas, Miguel A; Poza, Jesús; Hornero, Roberto
2015-12-01
The present study investigates the neural substrates underlying cognitive processing in schizophrenia (Sz) patients. To this end, an auditory 3-stimulus oddball paradigm was used to identify P3a and P3b components, elicited by rare-distractor and rare-target tones, respectively. Event-related potentials (ERP) were recorded from 31 Sz patients and 38 healthy controls. The P3a and P3b brain-source generators were identified by time-averaging of low-resolution brain electromagnetic tomography (LORETA) current density images. In contrast with the commonly used fixed window of interest (WOI), we proposed to apply an adaptive WOI, which takes into account subjects' P300 latency variability. Our results showed different P3a and P3b source activation patterns in both groups. P3b sources included frontal, parietal and limbic lobes, whereas P3a response generators were localized over bilateral frontal and superior temporal regions. These areas have been related to the discrimination of auditory stimulus and to the inhibition (P3a) or the initiation (P3b) of motor response in a cognitive task. In addition, differences in source localization between Sz and control groups were observed. Sz patients showed lower P3b source activity in bilateral frontal structures and the cingulate. P3a generators were less widespread for Sz patients than for controls in right superior, medial and middle frontal gyrus. Our findings suggest that target and distractor processing involves distinct attentional subsystems, both being altered in Sz. Hence, the study of neuroelectric brain information can provide further insights to understand cognitive processes and underlying mechanisms in Sz. Copyright © 2015 Elsevier B.V. All rights reserved.
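The adaptive window of interest described above can be implemented simply: locate each subject's P300 peak within a broad search range and centre the window on it. The sketch below assumes an illustrative search range and half-width; these are not the study's actual values.

```python
import numpy as np

def adaptive_woi(erp, times, search=(0.25, 0.55), half_width=0.05):
    """Sketch of an adaptive window of interest around a subject's P300.

    erp: 1-D averaged ERP (e.g., at Pz); times: latencies in seconds.
    Returns (start, end) of a window centred on the individual peak.
    """
    inside = (times >= search[0]) & (times <= search[1])
    peak_latency = times[inside][np.argmax(erp[inside])]   # individual P300 peak
    return peak_latency - half_width, peak_latency + half_width

# Hypothetical usage: simulated ERP with a positive peak near 380 ms
fs = 500
times = np.arange(-0.1, 0.8, 1 / fs)
erp = np.exp(-((times - 0.38) ** 2) / (2 * 0.03 ** 2))     # Gaussian stand-in for P300
print(adaptive_woi(erp, times))
```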
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Beer, Anton L.; Plank, Tina; Meyer, Georg; Greenlee, Mark W.
2013-01-01
Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex. PMID:23407860
Zenner, Hans P; Pfister, Markus; Birbaumer, Niels
2006-12-01
Acquired centralized tinnitus (ACT) is the most frequent form of chronic tinnitus. The proposed ACT sensitization (ACTS) assumes a peripheral initiation of tinnitus whereby sensitizing signals from the auditory system establish new neuronal connections in the brain. Consequently, permanent neurophysiological malfunction within the information-processing modules results. Successful treatment has to target this malfunctioning information processing. We present in this study the neurophysiological and psychophysiological aspects of a recently suggested neurophysiological model, which may explain the symptoms caused by central cognitive tinnitus sensitization. Although conditioned reflexes, as a causal agent of chronic tinnitus, respond to extinction procedures, sensitization may initiate a vicious circle of overexcitation of the auditory system, resisting extinction and habituation. We used the literature database as indicated under "References", covering English and German works. For the ACTS model we extracted neurophysiological hypotheses of the auditory stimulus processing and the neuronal connections of the central auditory system with other brain regions to explain the malfunctions of auditory information processing. The model does not assume information-processing changes specific for tinnitus but treats the processing of tinnitus signals comparable with the processing of other external stimuli. The model uses the extensive knowledge available on sensitization of perception and memory processes and highlights the similarities of tinnitus with central neuropathic pain. Quality, validity, and comparability of the extracted data were evaluated by peer reviewing. Statistical techniques were not used. According to the tinnitus sensitization model, a tinnitus signal originates (as a type I-IV tinnitus) in the cochlea. In the brain, concerned with perception and cognition, the (1) conditioned associations, as postulated by the tinnitus model of Jastreboff, and the (2) unconditioned sensitized stimulus responses, as postulated in the present ACTS model, are actively connected with and attributed to the tinnitus signal. Attention to the tinnitus constitutes a typical undesired sensitized response. Some of the tinnitus-associated attributes may be called essential, unconditioned sensitization attributes. By a process called facilitation, the tinnitus' essential attributes are suggested to activate the tinnitus response. The result is an undesired increase in responsivity, such as an increase in attentional focus to the eliciting tinnitus stimulus. The mechanisms underlying sensitization are known as a specific nonassociative learning process producing a structural fixation of long-term facilitation at the synaptic level. This sensitization model may be important for the development of a sensitization-specific treatment if extinction procedures alone do not lead to a satisfactory outcome. Inasmuch as this model considers sensitization as a nonassociative learning process based on cortical plasticity, it is reasonable to assume that this learning process can be altered by counteracting learning procedures. These counteracting learning procedures may consist of tinnitus-specific cognitive and behavioral procedures.
Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud
2011-01-01
Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.
2012-01-01
exposed mice showed significant injury (Figure). The injury level was greater on the medial contralateral side of the brain than on the ipsilateral side. ... JRRD Volume 49, Number 7, 2012, Pages 1153–1162: Preliminary studies on differential expression of auditory functional genes in the brain after ... hearing-related genes in different regions of the brain 6 h after repeated blast exposures in mice. Preliminary data showed that the expression of ...
Hearing loss and the central auditory system: Implications for hearing aids
NASA Astrophysics Data System (ADS)
Frisina, Robert D.
2003-04-01
Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]
Wolak, Tomasz; Cieśla, Katarzyna; Rusiniak, Mateusz; Piłka, Adam; Lewandowska, Monika; Pluta, Agnieszka; Skarżyński, Henryk; Skarżyński, Piotr H
2016-11-28
BACKGROUND The goal of the fMRI experiment was to explore the involvement of central auditory structures in pathomechanisms of a behaviorally manifested auditory temporary threshold shift in humans. MATERIAL AND METHODS The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation of narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5-4.5 kHz sweeps. RESULTS The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect can be seen already in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. CONCLUSIONS The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation.
Wolak, Tomasz; Cieśla, Katarzyna; Rusiniak, Mateusz; Piłka, Adam; Lewandowska, Monika; Pluta, Agnieszka; Skarżyński, Henryk; Skarżyński, Piotr H.
2016-01-01
Background The goal of the fMRI experiment was to explore the involvement of central auditory structures in pathomechanisms of a behaviorally manifested auditory temporary threshold shift in humans. Material/Methods The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation of narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5–4.5 kHz sweeps. Results The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect can be seen already in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. Conclusions The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation. PMID:27893698
Forlano, Paul M; Licorish, Roshney R; Ghahramani, Zachary N; Timothy, Miky; Ferrari, Melissa; Palmer, William C; Sisneros, Joseph A
2017-10-01
Little is known regarding the coordination of audition with decision-making and subsequent motor responses that initiate social behavior including mate localization during courtship. Using the midshipman fish model, we tested the hypothesis that the time spent by females attending and responding to the advertisement call is correlated with the activation of a specific subset of catecholaminergic (CA) and social decision-making network (SDM) nuclei underlying auditory-driven sexual motivation. In addition, we quantified the relationship of neural activation between CA and SDM nuclei in all responders with the goal of providing a map of functional connectivity of the circuitry underlying a motivated state responsive to acoustic cues during mate localization. In order to make a baseline qualitative comparison of this functional brain map to unmotivated females, we made a similar correlative comparison of brain activation in females who were unresponsive to the advertisement call playback. Our results support an important role for dopaminergic neurons in the periventricular posterior tuberculum and ventral thalamus, putative A11 and A13 tetrapod homologues, respectively, as well as the posterior parvocellular preoptic area and dorsomedial telencephalon (laterobasal amygdala homologue) in auditory attention and appetitive sexual behavior in fishes. These findings may also offer insights into the function of these highly conserved nuclei in the context of auditory-driven reproductive social behavior across vertebrates. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Cortical evoked potentials to an auditory illusion: binaural beats.
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
2009-08-01
To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp.
Cortical Evoked Potentials to an Auditory Illusion: Binaural Beats
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
2009-01-01
Objective: To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1,000 Hz base frequencies, and compare it to the sound onset response. Methods: Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2,000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. Results: All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Conclusions: Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Significance: Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp. PMID:19616993
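As an illustration of the dichotic stimulus described in the two abstracts above, the following minimal Python sketch generates a binaural-beat pair: an unmodulated tone at the base frequency to one ear and a tone 3 or 6 Hz higher to the other. The sampling rate and the stereo array layout are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def binaural_beat(base_hz=250.0, beat_hz=3.0, dur_s=2.0, fs=44100):
    """Dichotic tone pair: base_hz to one ear, base_hz + beat_hz to the other.

    Each ear receives an unmodulated pure tone; the "beat" arises only
    centrally, which is the illusion probed in the study above.
    """
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape: (n_samples, 2 channels)

# The four conditions reported above: 250/1000 Hz base x 3/6 Hz beat.
stimuli = {(base, beat): binaural_beat(base, beat)
           for base in (250.0, 1000.0) for beat in (3.0, 6.0)}
```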
Low-Frequency Cortical Oscillations Entrain to Subthreshold Rhythmic Auditory Stimuli
Schroeder, Charles E.; Poeppel, David; van Atteveldt, Nienke
2017-01-01
Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this “inaudible” rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness. SIGNIFICANCE STATEMENT The environment is full of rhythmically structured signals that the nervous system can exploit for information processing. Thus, it is important to understand how the brain processes such temporally structured, regular features of external stimuli. Here we report the alignment of slowly fluctuating oscillatory brain activity to external rhythmic structure before its behavioral detection. These results indicate that phase alignment is a general mechanism of the brain to process rhythmic structure and can occur without the perceptual detection of this temporal structure. PMID:28411273
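One common way to quantify the phase locking described above is intertrial coherence: project each trial onto a complex sinusoid at the stimulation rate and take the length of the mean resultant phase vector across trials. The sketch below is a generic single-frequency illustration with synthetic data, not the magnetoencephalography/electrocorticography pipeline used in the study; the sampling rate and trial counts are assumptions.

```python
import numpy as np

def intertrial_coherence(trials, fs, freq):
    """Phase locking of single trials at one frequency.

    trials : array (n_trials, n_samples). Returns a value in [0, 1];
    1 means perfectly aligned phase across trials, ~0 means random phase.
    """
    n_trials, n_samples = trials.shape
    t = np.arange(n_samples) / fs
    kernel = np.exp(-2j * np.pi * freq * t)      # one-bin Fourier projection
    phases = np.angle(trials @ kernel)           # one phase estimate per trial
    return np.abs(np.mean(np.exp(1j * phases)))  # resultant vector length

# Synthetic check: phase-locked trials give a value near 1, random-phase trials
# give a much lower value.
fs, f = 250, 3.0
t = np.arange(2 * fs) / fs
locked = np.array([np.sin(2 * np.pi * f * t) + 0.5 * np.random.randn(t.size)
                   for _ in range(40)])
jittered = np.array([np.sin(2 * np.pi * f * t + np.random.uniform(0, 2 * np.pi))
                     for _ in range(40)])
print(intertrial_coherence(locked, fs, f), intertrial_coherence(jittered, fs, f))
```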
Montie, Eric W; Manire, Charlie A; Mann, David A
2011-03-15
In June 2008, two pygmy killer whales (Feresa attenuata) were stranded alive near Boca Grande, FL, USA, and were taken into rehabilitation. We used this opportunity to learn about the peripheral anatomy of the auditory system and hearing sensitivity of these rare toothed whales. Three-dimensional (3-D) reconstructions of head structures from X-ray computed tomography (CT) images revealed mandibles that were hollow, lacked a bony lamina medial to the pan bone and contained mandibular fat bodies that extended caudally and abutted the tympanoperiotic complex. Using auditory evoked potential (AEP) procedures, the modulation rate transfer function was determined. Maximum evoked potential responses occurred at modulation frequencies of 500 and 1000 Hz. The AEP-derived audiograms were U-shaped. The lowest hearing thresholds occurred between 20 and 60 kHz, with the best hearing sensitivity at 40 kHz. The auditory brainstem response (ABR) was composed of seven waves and resembled the ABR of the bottlenose and common dolphins. By changing electrode locations, creating 3-D reconstructions of the brain from CT images and measuring the amplitude of the ABR waves, we provided evidence that the neuroanatomical sources of ABR waves I, IV and VI were the auditory nerve, inferior colliculus and the medial geniculate body, respectively. The combination of AEP testing and CT imaging provided a new synthesis of methods for studying the auditory system of cetaceans.
Impey, Danielle; Baddeley, Ashley; Nelson, Renee; Labelle, Alain; Knott, Verner
2017-11-01
Cognitive impairment has been proposed to be the core feature of schizophrenia (Sz). Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which can improve cognitive function in healthy participants and in psychiatric patients with cognitive deficits. tDCS has been shown to improve cognition and hallucination symptoms in Sz, a disorder also associated with marked sensory processing deficits. Recent findings in healthy controls demonstrate that anodal tDCS increases auditory deviance detection, as measured by the brain-based event-related potential, mismatch negativity (MMN), which is a putative biomarker of Sz that has been proposed as a target for treatment of Sz cognition. This pilot study conducted a randomized, double-blind assessment of the effects of pre- and post-tDCS on MMN-indexed auditory discrimination in 12 Sz patients, moderated by auditory hallucination (AH) presence, as well as working memory performance. Assessments were conducted in three sessions involving temporal and frontal lobe anodal stimulation (to transiently excite local brain activity), and one control session involving 'sham' stimulation (meaning with the device turned off, i.e., no stimulation). Results demonstrated a trend for pitch MMN amplitude to increase with anodal temporal tDCS, which was significant in a subgroup of Sz individuals with AHs. Anodal frontal tDCS significantly increased WM performance on the 2-back task, which was found to positively correlate with MMN-tDCS effects. The findings contribute to our understanding of tDCS effects for sensory processing deficits and working memory performance in Sz and may have implications for psychiatric disorders with sensory deficits.
Genetic modification of ALAD and VDR on lead-induced impairment of hearing in children.
Pawlas, Natalia; Broberg, Karin; Olewińska, Elżbieta; Kozłowska, Agnieszka; Skerfving, Staffan; Pawlas, Krystyna
2015-05-01
Polymorphisms in the δ-aminolevulinic acid dehydratase (ALAD) and the vitamin D receptor (VDR) genes may modify lead metabolism and neurotoxicity. Two cohorts of children were examined for hearing [pure-tone audiometry (PTA), brain stem auditory evoked potentials (BAEP)], acoustic otoemission (transient emission evoked by a click) and blood-lead concentrations (B-Pb). The children were genotyped for polymorphisms in ALAD and VDR. The median B-Pbs were 55 and 36μg/L in the two cohorts (merged cohort 45μg/L). B-Pb was significantly associated with impaired hearing when tested with PTA (correlation coefficient rS=0.12; P<0.01), BAEP (rS=0.18; P<0.001) and otoemission (rS=-0.24; P<0.001). VDR significantly modified the lead-induced effects on PTA. Carriers of the VDR alleles BsmI B, VDR TaqI t and VDR FokI F showed greater toxic effects on PTA, compared to BsmI bb, VDR TaqI TT and VDR FokI ff carriers. No significant interaction was found for ALAD. Lead impairs hearing functions in the route from the cochlea to the brain stem at low-level exposure, and polymorphisms in VDR significantly modify these effects. Copyright © 2015 Elsevier B.V. All rights reserved.
A role for descending auditory cortical projections in songbird vocal learning
Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S
2014-01-01
Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934
Albouy, Philippe; Mattout, Jérémie; Sanchez, Gaëtan; Tillmann, Barbara; Caclin, Anne
2015-01-01
Congenital amusia is a neuro-developmental disorder that primarily manifests as a difficulty in the perception and memory of pitch-based materials, including music. Recent findings have shown that the amusic brain exhibits altered functioning of a fronto-temporal network during pitch perception and short-term memory. Within this network, during the encoding of melodies, a decreased right backward frontal-to-temporal connectivity was reported in amusia, along with an abnormal connectivity within and between auditory cortices. The present study investigated whether connectivity patterns between these regions were affected during the short-term memory retrieval of melodies. Amusics and controls had to indicate whether sequences of six tones that were presented in pairs were the same or different. When melodies were different only one tone changed in the second melody. Brain responses to the changed tone in "Different" trials and to its equivalent (original) tone in "Same" trials were compared between groups using Dynamic Causal Modeling (DCM). DCM results confirmed that congenital amusia is characterized by an altered effective connectivity within and between the two auditory cortices during sound processing. Furthermore, right temporal-to-frontal message passing was altered in comparison to controls, with notably an increase in "Same" trials. An additional analysis in control participants emphasized that the detection of an unexpected event in the typically functioning brain is supported by right fronto-temporal connections. The results can be interpreted in a predictive coding framework as reflecting an abnormal prediction error sent by temporal auditory regions towards frontal areas in the amusic brain.
Emerging technologies with potential for objectively evaluating speech recognition skills.
Rawool, Vishakha Waman
2016-01-01
Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.
Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna
2015-03-01
Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.
Shrem, Talia; Murray, Micah M; Deouell, Leon Y
2017-11-01
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J
1998-12-01
Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus. J. Neurophysiol. 80: 2941-2953, 1998. The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalities integrated with a motor output system. It is implicated in saccade generation and in behavioral responses to novel sensory stimuli, and receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to chronically deliver neurotransmitter receptor antagonists to the SC of the guinea pig to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer could develop a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine resulted in a reduction in the proportion of spatially tuned responses with an increase in the proportion of broadly tuned responses and a degradation in topographic order. Thus N-methyl-D-aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some but not total disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC but not necessarily in the generation of topographical order of these responses.
Lee, Sang-Yeon; Nam, Dong Woo; Koo, Ja-Won; De Ridder, Dirk; Vanneste, Sven; Song, Jae-Jin
2017-10-01
Recent studies have adopted the Bayesian brain model to explain the generation of tinnitus in subjects with auditory deafferentation. That is, as the human brain works in a Bayesian manner to reduce environmental uncertainty, missing auditory information due to hearing loss may cause auditory phantom percepts, i.e., tinnitus. This type of deafferentation-induced auditory phantom percept should be preceded by auditory experience because the fill-in phenomenon, namely tinnitus, is based upon auditory prediction and the resultant prediction error. For example, a recent animal study observed the absence of tinnitus in cats with congenital single-sided deafness (SSD; Eggermont and Kral, Hear Res 2016). However, no human studies have investigated the presence and characteristics of tinnitus in subjects with congenital SSD. Thus, the present study sought to reveal differences in the generation of tinnitus between subjects with congenital SSD and those with acquired SSD to evaluate the replicability of previous animal studies. This study enrolled 20 subjects with congenital SSD and 44 subjects with acquired SSD and examined the presence and characteristics of tinnitus in the groups. None of the 20 subjects with congenital SSD perceived tinnitus on the affected side, whereas 30 of 44 subjects with acquired SSD experienced tinnitus on the affected side. Additionally, there were significant positive correlations between tinnitus characteristics and the audiometric characteristics of the SSD. In accordance with the findings of the recent animal study, tinnitus was absent in subjects with congenital SSD, but relatively frequent in subjects with acquired SSD, which suggests that the development of tinnitus should be preceded by auditory experience. In other words, subjects with profound congenital peripheral deafferentation do not develop auditory phantom percepts because no auditory predictions are available from the Bayesian brain. Copyright © 2017 Elsevier B.V. All rights reserved.
Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst
NASA Technical Reports Server (NTRS)
Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael
2014-01-01
The aim of the current study was to characterize the brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular Vestibular Evoked Myogenic Potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that air-conducted skull tap elicits both saccular and utricular VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009, Wackym et al., 2012). However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent of the semicircular canals. This is of high importance for studying the vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre- and postcentral gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits a pattern of cortical activity similar to that elicited by the auditory tone burst. Subjects put on a set of MR compatible skull tappers and headphones inside the 3T GE scanner, while lying in a supine position with eyes closed. All subjects received both forms of stimulation; however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory tone burst elicited vestibular evoked activation, indicated by eye muscle response. Our preliminary analyses showed that the skull tap elicited activation in medial frontal gyrus, superior temporal gyrus, postcentral gyrus, transverse temporal gyrus, anterior cingulate, and putamen. The auditory tone bursts elicited activation in medial frontal gyrus, superior temporal gyrus, superior frontal gyrus, precentral gyrus, inferior and superior parietal lobules. In line with our hypothesis, skull taps elicited a pattern of cortical activity closely similar to that elicited by auditory tone bursts. Further analysis will determine the extent to which the skull taps can replace the auditory tone stimulation in clinical and basic science vestibular assessments.
Finite element modeling of human brain response to football helmet impacts.
Darling, T; Muthuswamy, J; Rajan, S D
2016-10-01
The football helmet is used to help mitigate the occurrence of impact-related traumatic brain injuries (TBI) and mild traumatic brain injuries (mTBI) in the game of American football. While the current helmet design methodology may be adequate for reducing linear acceleration of the head and minimizing TBI, it has, however, had less effect in minimizing mTBI. The objectives of this study are (a) to develop and validate a coupled finite element (FE) model of a football helmet and the human body, and (b) to assess responses of different regions of the brain to two different impact conditions - frontal oblique and crown impact conditions. The FE helmet model was validated using experimental results of drop tests. Subsequently, the integrated helmet-human body FE model was used to assess the responses of different regions of the brain to impact loads. Strain-rate, strain, and stress measures in the corpus callosum, midbrain, and brain stem were assessed. Results show that maximum strain-rates of 27 and 19 s⁻¹ are observed in the brain-stem and mid-brain, respectively. This could potentially lead to axonal injuries and neuronal cell death during crown impact conditions. The developed experimental-numerical framework can be used in the study of other helmet-related impact conditions.
Sumiya, Motofumi; Koike, Takahiko; Okazaki, Shuntaro; Kitada, Ryo; Sadato, Norihiro
2017-10-01
Social interactions can be facilitated by action-outcome contingency, in which self-actions result in relevant responses from others. Research has indicated that the striatal reward system plays a role in generating action-outcome contingency signals. However, the neural mechanisms wherein signals regarding self-action and others' responses are integrated to generate the contingency signal remain poorly understood. We conducted a functional MRI study to test the hypothesis that brain activity representing the self modulates connectivity between the striatal reward system and sensory regions involved in the processing of others' responses. We employed a contingency task in which participants made the listener laugh by telling jokes. Participants reported more pleasure when greater laughter followed their own jokes than those of another. Self-relevant listener's responses produced stronger activation in the medial prefrontal cortex (mPFC). Laughter was associated with activity in the auditory cortex. The ventral striatum exhibited stronger activation when participants made listeners laugh than when another did. In physio-physiological interaction analyses, the ventral striatum showed interaction effects for signals extracted from the mPFC and auditory cortex. These results support the hypothesis that the mPFC, which is implicated in self-related processing, gates sensory input associated with others' responses during value processing in the ventral striatum. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.
Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F
2017-08-16
Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors 0270-6474/17/377906-15$15.00/0.
Yamamoto, Katsura; Tabei, Kenichi; Katsuyama, Narumi; Taira, Masato; Kitamura, Ken
2017-01-01
Patients with unilateral sensorineural hearing loss (UHL) often complain of hearing difficulties in noisy environments. To clarify this, we compared brain activation in patients with UHL with that of healthy participants during speech perception in a noisy environment, using functional magnetic resonance imaging (fMRI). A pure tone of 1 kHz, or 14 monosyllabic speech sounds at 65‒70 dB accompanied by MRI scan noise at 75 dB, were presented to both ears for 1 second each and participants were instructed to press a button when they could hear the pure tone or speech sound. Based on the activation areas of healthy participants, the primary auditory cortex, the anterior auditory association areas, and the posterior auditory association areas were set as regions of interest (ROI). In each of these regions, we compared brain activity between healthy participants and patients with UHL. The results revealed that patients with right-side UHL showed different brain activity in the right posterior auditory area during perception of pure tones versus monosyllables. Clinically, left-side and right-side UHL are not presently differentiated and are similarly diagnosed and treated; however, the results of this study suggest that a laterality-specific treatment should be chosen.
Triarhou, Lazaros C; Verina, Tatyana
2016-11-01
In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have been largely considered as findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he just seems to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death of 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich. Copyright © 2016 Elsevier B.V. All rights reserved.
Chang, Alice Y W; Li, Faith C H; Huang, Chi-Wei; Wu, Julie C C; Dai, Kuang-Yu; Chen, Chang-Han; Li, Shau-Hsuan; Su, Chia-Hao; Wu, Re-Wen
2014-11-01
Pressor response after stroke commonly leads to early death or susceptibility to stroke recurrence, and detailed mechanisms are still lacking. We assessed the hypothesis that the renin-angiotensin system contributes to pressor response after stroke by differential modulation of the pro-inflammatory chemokine monocyte chemoattractant protein-1 (MCP-1) in the rostral ventrolateral medulla (RVLM), a key brain stem site that maintains blood pressure. We also investigated the beneficial effects of a novel renin inhibitor, aliskiren, against stroke-elicited pressor response. Experiments were performed in male adult Sprague-Dawley rats. Stroke induced by middle cerebral artery occlusion elicited significant pressor response, accompanied by activation of angiotensin II (Ang II)/type I receptor (AT1R) and AT2R signaling, depression of Ang-(1-7)/MasR and Ang IV/AT4R cascade, alongside augmentation of MCP-1/C-C chemokine receptor 2 (CCR2) signaling and neuroinflammation in the RVLM. Stroke-elicited pressor response was significantly blunted by antagonism of AT1R, AT2R or MCP-1/CCR2 signaling, and eliminated by applying Ang-(1-7) or Ang IV into the RVLM. Furthermore, stroke-activated MCP-1/CCR2 signaling was enhanced by AT1R and AT2R activation, and depressed by Ang-(1-7)/MasR and Ang IV/AT4R cascade. Aliskiren inhibited stroke-elicited pressor response via downregulating MCP-1/CCR2 activity and reduced neuroinflammation in the RVLM; these effects were potentiated by Ang-(1-7) or Ang IV. We conclude that whereas Ang II/AT1R or Ang II/AT2R signaling in the brain stem enhances, Ang-(1-7)/MasR or Ang IV/AT4R antagonizes pressor response after stroke by differential modulations of MCP-1 in the RVLM. Furthermore, combined administration of aliskiren and Ang-(1-7) or Ang IV into the brain stem provides more effective amelioration of stroke-induced pressor response. Copyright © 2014 Elsevier Inc. All rights reserved.
Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z
2018-05-15
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
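For readers unfamiliar with the kernel (response function) estimation described above, the following sketch shows the core idea on synthetic data: model the response as a lagged linear convolution of the stimulus and recover the kernel by regression. It uses ridge regression rather than the boosting-with-cross-validation estimator used in the study, and works on a single sensor-level signal rather than on minimum norm source estimates, so it is only a simplified stand-in.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate a temporal response function by time-lagged ridge regression.

    Models response[t] ~ sum_k kernel[k] * stimulus[t - k]; ridge is used here
    as a simple stand-in for the boosting estimator described in the abstract.
    """
    n = len(stimulus)
    X = np.zeros((n, n_lags))          # lagged design matrix: X[t, k] = stimulus[t - k]
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# Synthetic check: recover a known kernel from a convolved, noisy response.
rng = np.random.default_rng(0)
true_kernel = np.exp(-np.arange(30) / 10.0) * np.sin(np.arange(30) / 3.0)
stim = rng.standard_normal(5000)
resp = np.convolve(stim, true_kernel)[:5000] + 0.1 * rng.standard_normal(5000)
est = estimate_trf(stim, resp, n_lags=30)
print(np.corrcoef(true_kernel, est)[0, 1])  # close to 1
```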
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
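Decoding accuracy, as used above to assess the neural representation of emotion features, is commonly computed as the cross-validated performance of a classifier trained to predict the attended category from multivoxel response patterns. The sketch below illustrates that computation on synthetic data with scikit-learn; the trial counts, voxel counts, and classifier choice are assumptions and not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: one voxel-pattern vector per trial and a binary label
# (e.g., crying vs. laughing) for the attended object in that trial.
rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 500
labels = rng.integers(0, 2, n_trials)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5   # inject a weak class difference for the demo

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # above 0.5 chance
```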
Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril
2007-01-01
Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It was repeatedly reported that low frequency (
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom with additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues, can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and specifically end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
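The information transfer rates quoted above (in bits per minute) are conventionally computed with the Wolpaw formula, which combines the number of selectable symbols, the selection accuracy, and the selection rate. The sketch below implements that formula; the 9-symbol layout and one-selection-per-minute rate in the example are hypothetical, since the speller's exact configuration and timing are not given here.

```python
import math

def wolpaw_itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate, a standard summary of BCI speller performance."""
    if accuracy >= 1.0:
        bits_per_selection = math.log2(n_classes)
    elif accuracy <= 1.0 / n_classes:
        bits_per_selection = 0.0  # at or below chance, no information transferred
    else:
        bits_per_selection = (math.log2(n_classes)
                              + accuracy * math.log2(accuracy)
                              + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits_per_selection * selections_per_min

# Hypothetical example: a 9-symbol speller at 92% accuracy and one selection per
# minute yields roughly 2.5 bits/min.
print(wolpaw_itr_bits_per_min(n_classes=9, accuracy=0.92, selections_per_min=1.0))
```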
[Forensic application of brainstem auditory evoked potential in patients with brain concussion].
Zheng, Xing-Bin; Li, Sheng-Yan; Huang, Si-Xing; Ma, Ke-Xin
2008-12-01
The aim was to investigate changes in the brainstem auditory evoked potential (BAEP) in patients with brain concussion. Nineteen patients with brain concussion underwent BAEP examination, and the data were compared with those of healthy persons reported in the literature. The rate of abnormal BAEP in patients with brain concussion was 89.5%, a statistically significant difference from that of healthy persons (P<0.05). The rate of BAEP abnormality within the brainstem pathway was 73.7%, indicating brainstem dysfunction in these patients. BAEP may therefore be helpful in the forensic diagnosis of brain concussion.
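A comparison of abnormality rates like the one reported above is typically made with a test for proportions. The sketch below shows one such test (Fisher's exact test) on a 2x2 table; the patient counts follow from the reported 89.5% of 19 cases, while the control counts are assumed purely for illustration and do not come from the study.

```python
# Minimal sketch of comparing an observed abnormal-BAEP rate against a
# reference group with Fisher's exact test. Control counts are illustrative.
from scipy.stats import fisher_exact

patients = [17, 2]    # abnormal, normal (89.5% of 19 patients)
controls = [2, 38]    # assumed healthy reference counts, for illustration only
odds_ratio, p_value = fisher_exact([patients, controls])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```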
Single Trial Brain Electrical Patterns of an Auditory and Visual Perceptuomotor Task.
1983-06-01
...perceptual and cognitive processing was completed, a motor program common to both tasks was executed, regardless of differences in the stimuli or type... investigation and conclusions concerning these issues during the developmental program of 12 pilot recordings is reported in the following section. B. Task... number on the CRT screen (duration 375 msec) 1 second after completion of response as determined by the program. If the response was sufficiently accurate
Topographic mapping of a hierarchy of temporal receptive windows using a narrated story
Lerner, Y.; Honey, C.J.; Silbert, L.J.; Hasson, U.
2011-01-01
Real-life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities, the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields (SRF), we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time prior to a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real-life story scrambled at the time scales of words, sentences, and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy, we found parietal and frontal areas that responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex. PMID:21414912
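The response-reliability measure underlying this TRW mapping can be illustrated with a leave-one-out inter-subject correlation computed separately for each scrambling condition: a region whose time course remains correlated across subjects even for word-scrambled stories has a short TRW, whereas a region reliable only for intact paragraphs has a long one. The sketch below uses simulated time courses; the original preprocessing and statistics are not reproduced.

```python
# Minimal sketch of leave-one-out inter-subject correlation (ISC) for one
# region and one scrambling condition. Time courses are simulated.
import numpy as np

def intersubject_correlation(time_courses: np.ndarray) -> float:
    """Mean leave-one-out correlation; time_courses is (n_subjects, n_timepoints)."""
    n = time_courses.shape[0]
    r_values = []
    for i in range(n):
        left_out = time_courses[i]
        group_mean = time_courses[np.arange(n) != i].mean(axis=0)
        r_values.append(np.corrcoef(left_out, group_mean)[0, 1])
    return float(np.mean(r_values))

rng = np.random.default_rng(1)
shared = rng.normal(size=300)                            # condition-locked signal
subjects = shared + rng.normal(scale=2.0, size=(10, 300))  # 10 noisy subjects
print(f"ISC = {intersubject_correlation(subjects):.2f}")
```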
Kantrowitz, Joshua T.; Epstein, Michael L.; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M.; Revheim, Nadine; Lehrfeld, Nayla P.; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C.
2016-01-01
Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined the underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate-type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time–frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level processing. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in the sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ- and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. The d-serine results suggest, first, that NMDAR dysfunction may contribute to the underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. PMID:27913408
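As a concrete illustration of the θ-frequency measures referred to above, the sketch below estimates single-trial theta-band power with a complex Morlet wavelet. The simulated epoch, sampling rate, and wavelet width are assumptions for illustration, not parameters from the study.

```python
# Minimal sketch of single-trial theta-band (4-7 Hz) power estimation via
# complex Morlet convolution. The epoch is simulated.
import numpy as np

def morlet_power(signal: np.ndarray, sfreq: float, freq: float, n_cycles: float = 5.0):
    """Instantaneous power at `freq` from a complex Morlet wavelet convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

sfreq = 500.0
times = np.arange(0, 1.0, 1 / sfreq)
epoch = np.sin(2 * np.pi * 6 * times) + 0.5 * np.random.default_rng(2).normal(size=times.size)
theta_power = np.mean([morlet_power(epoch, sfreq, f).mean() for f in (4, 5, 6, 7)])
print(f"mean theta power: {theta_power:.3f}")
```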
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
2002-12-01
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes that determine what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes in response to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants occurred either only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, as also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected in the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event formation are fully governed by the context within which the sounds occur: perceiving the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of those deviants as a single event (indexed by the MMN), and thus occurred without a corresponding change in the stimulus-driven sound organization.
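For readers unfamiliar with the MMN measure used in this and the preceding entry, the sketch below shows the standard way a mismatch negativity difference wave is derived: deviant and standard epochs are averaged separately and subtracted, and the negativity is read out in a post-stimulus latency window. All data, the epoch window, and the latency range are simulated assumptions.

```python
# Minimal sketch of an MMN difference wave (deviant average minus standard
# average) at one electrode. Epochs and the ~150 ms negativity are simulated.
import numpy as np

sfreq = 250.0
times = np.arange(-0.1, 0.4, 1 / sfreq)                 # -100 to 400 ms
rng = np.random.default_rng(3)
standards = rng.normal(scale=2.0, size=(400, times.size))
deviants = rng.normal(scale=2.0, size=(80, times.size))
deviants -= 1.5 * np.exp(-((times - 0.15) ** 2) / (2 * 0.03**2))  # injected MMN-like deflection

difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)
window = (times >= 0.1) & (times <= 0.25)               # assumed MMN latency range
peak_idx = np.argmin(difference_wave[window])
print(f"MMN peak: {difference_wave[window][peak_idx]:.2f} (a.u.) "
      f"at {times[window][peak_idx] * 1000:.0f} ms")
```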
Zong, Liang; Guan, Jing; Ealy, Megan; Zhang, Qiujing; Wang, Dayong; Wang, Hongyang; Zhao, Yali; Shen, Zhirong; Campbell, Colleen A; Wang, Fengchao; Yang, Ju; Sun, Wei; Lan, Lan; Ding, Dalian; Xie, Linyi; Qi, Yue; Lou, Xin; Huang, Xusheng; Shi, Qiang; Chang, Suhua; Xiong, Wenping; Yin, Zifang; Yu, Ning; Zhao, Hui; Wang, Jun; Wang, Jing; Salvi, Richard J; Petit, Christine; Smith, Richard J H; Wang, Qiuju
2015-01-01
Background: Auditory neuropathy spectrum disorder (ANSD) is a form of hearing loss in which auditory signal transmission from the inner ear to the auditory nerve and brain stem is distorted, giving rise to speech perception difficulties beyond those expected for the observed degree of hearing loss. For many cases of ANSD, the underlying molecular pathology and the site of lesion remain unclear. The X-linked form of the condition, AUNX1, has been mapped to Xq23-q27.3, although the causative gene has yet to be identified. Methods: We performed whole-exome sequencing on DNA samples from the AUNX1 family and another small, phenotypically similar but unrelated ANSD family. Results: We identified two missense mutations in AIFM1 in these families: c.1352G>A (p.R451Q) in the AUNX1 family and c.1030C>T (p.L344F) in the second ANSD family. Mutation screening in a large cohort of 3 additional unrelated families and 93 sporadic cases with ANSD identified 9 more missense mutations in AIFM1. Bioinformatics analysis and expression studies support this gene as being causative of ANSD. Conclusions: Variants in the AIFM1 gene are a common cause of familial and sporadic ANSD and provide insight into the expanded spectrum of AIFM1-associated diseases. The finding of cochlear nerve hypoplasia in some patients with AIFM1-related ANSD implies that MRI may be of value in localising the site of lesion, and suggests that cochlear implantation in these patients may have limited success. PMID:25986071
Partially Overlapping Brain Networks for Singing and Cello Playing.
Segado, Melanie; Hollinger, Avrum; Thibodeau, Joseph; Penhune, Virginia; Zatorre, Robert J
2018-01-01
This research uses an MR-compatible cello to compare functional brain activation during singing and cello playing within the same individuals, to determine the extent to which arbitrary auditory-motor associations, like those required to play the cello, co-opt functional brain networks that evolved for singing. Musical instrument playing and singing both require highly specific associations between sounds and movements. Because both are used to produce musical sounds, it is often assumed in the literature that their neural underpinnings are highly similar. However, singing is an evolutionarily old human trait, and the auditory-motor associations used for singing are also used for speech and non-speech vocalizations. This sets it apart from the arbitrary auditory-motor associations required to play musical instruments. The pitch range of the cello is similar to that of the human voice, but cello playing is completely independent of the vocal apparatus and can therefore be used to dissociate the auditory-vocal network from the auditory-motor network. While in the MR scanner, 11 expert cellists listened to and subsequently produced individual tones either by singing or by cello playing. All participants were able to sing and play the target tones in tune (<50 cents deviation from the target). We found that brain activity during cello playing directly overlaps with brain activity during singing in many areas within the auditory-vocal network. These include the primary motor, dorsal premotor, and supplementary motor cortices (M1, dPMC, SMA), the primary and periprimary auditory cortices within the superior temporal gyrus (STG) including Heschl's gyrus, the anterior insula (aINS), anterior cingulate cortex (ACC), intraparietal sulcus (IPS), and cerebellum, but notably exclude the periaqueductal gray (PAG) and basal ganglia (putamen). Second, we found that activity within the overlapping areas is positively correlated with, and therefore likely contributes to, both singing and playing in tune, as determined by performance measures. Third, we found that activity in auditory areas is functionally connected with activity in dorsal motor and premotor areas, and that the connectivity between them is positively correlated with good performance on this task. This functional connectivity suggests that these brain areas work together to support task performance rather than being merely coincidentally active. Last, our findings showed that cello playing may directly co-opt vocal areas (including the larynx area of motor cortex), especially if musical training begins before age 7.
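The overlap and brain-behaviour analyses described above can be illustrated schematically: a conjunction of thresholded singing and playing maps defines the shared network, and mean activity in that overlap is then correlated with in-tune performance. The sketch below uses simulated maps, scores, and thresholds; none of these values come from the study.

```python
# Minimal sketch of (1) a conjunction of two thresholded activation maps and
# (2) a brain-behaviour correlation in the overlapping region. All simulated.
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_voxels = 11, 5000
sing_t = rng.normal(size=n_voxels) + 1.0            # simulated singing t-map
play_t = rng.normal(size=n_voxels) + 1.0            # simulated playing t-map
overlap = (sing_t > 2.0) & (play_t > 2.0)           # conjunction at an assumed threshold

roi_activity = rng.normal(size=n_subjects)          # simulated mean beta in the overlap, per subject
in_tune_score = 0.6 * roi_activity + rng.normal(scale=0.5, size=n_subjects)
r = np.corrcoef(roi_activity, in_tune_score)[0, 1]
print(f"{overlap.sum()} overlapping voxels; r(activity, in-tune) = {r:.2f}")
```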
Trainor, Laurel J
2012-02-01
Evidence is presented that predictive coding is fundamental to brain function and present in early infancy. Indeed, mismatch responses to unexpected auditory stimuli are among the earliest robust cortical event-related potential responses, and have been measured in young infants in response to many types of deviation, including in pitch, timing, and melodic pattern. Furthermore, mismatch responses change quickly with specific experience, suggesting that predictive coding reflects a powerful, early-developing learning mechanism. Copyright © 2011 Elsevier B.V. All rights reserved.
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Behaviorally, it is well established that human observers integrate signals near-optimally, weighting each in proportion to its reliability, as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, the intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities, as predicted by the MLE model, but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how the intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
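The maximum likelihood estimation model referenced above predicts that each cue is weighted by its relative reliability (inverse variance) and that the fused estimate is more precise than either cue alone. The sketch below implements that standard rule; the example locations and noise levels are hypothetical and not drawn from the study.

```python
# Minimal sketch of reliability-weighted (MLE) audiovisual cue fusion.
import numpy as np

def mle_fusion(loc_a: float, sigma_a: float, loc_v: float, sigma_v: float):
    """Reliability-weighted audiovisual location estimate and its predicted variance."""
    rel_a, rel_v = 1 / sigma_a**2, 1 / sigma_v**2    # reliabilities = inverse variances
    w_a = rel_a / (rel_a + rel_v)
    w_v = rel_v / (rel_a + rel_v)
    fused = w_a * loc_a + w_v * loc_v
    fused_var = 1 / (rel_a + rel_v)                  # always <= min(sigma_a**2, sigma_v**2)
    return fused, fused_var, (w_a, w_v)

# e.g. a reliable auditory cue (sigma 4 deg) paired with a blurred visual cue (sigma 8 deg)
print(mle_fusion(loc_a=-5.0, sigma_a=4.0, loc_v=5.0, sigma_v=8.0))
```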