Sample records for developing auditory system

  1. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur both naturally and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics as well as physiologic measurements and neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, auditory maturation clearly continues well into the teenage years. This maturation involves the auditory pathways; however, non-auditory changes (attention, memory, cognition) also play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  2. Cellular and Molecular Underpinnings of Neuronal Assembly in the Central Auditory System during Mouse Development

    PubMed Central

    Di Bonito, Maria; Studer, Michèle

    2017-01-01

    During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562

  3. An acoustic gap between the NICU and womb: a potential risk for compromised neuroplasticity of the auditory system in preterm infants.

    PubMed

    Lahav, Amir; Skoe, Erika

    2014-01-01

    The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring initial optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which we argue is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize that the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to increased risks for a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds, further limiting quality exposure to linguistic stimuli. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.

  4. Postembryonic development of the auditory system of the cicada Okanagana rimosa (Say) (Homoptera: Auchenorrhyncha: Cicadidae).

    PubMed

    Strauss, Johannes; Lakes-Harlan, Reinhard

    2009-01-01

    Cicadas (Homoptera: Auchenorrhyncha: Cicadidae) use acoustic signalling for mate attraction and perceive auditory signals by a tympanal organ in the second abdominal segment. The main structural features of the ear are the tympanum, the sensory organ consisting of numerous scolopidial cells, and the cuticular link between sensory neurones and tympanum (tympanal ridge and apodeme). Here, a first investigation of the postembryonic development of the auditory system is presented. In insects, sensory neurones usually differentiate during embryogenesis, and sound-perceiving structures form during postembryogenesis. Cicadas have an elongated and subterranean postembryogenesis which can take several years until the final moult. The neuroanatomy and functional morphology of the auditory system of the cicada Okanagana rimosa (Say) are documented for the adult and the three last larval stages. The sensory organ and the projection of sensory afferents to the CNS are present in the earliest stages investigated. The cuticular structures of the tympanum, the tympanal frame holding the tympanum, and the tympanal ridge differentiate in the later stages of postembryogenesis. Thus, despite the different lifestyles of larvae and adults, the neuronal components of the cicada auditory system already develop during embryogenesis or early postembryogenesis, and sound-perceiving structures like tympana are elaborated later in postembryogenesis. The life cycle allows comparison of cicada development to other hemimetabolous insects with respect to the influence of specially adapted life cycle stages on auditory maturation. The neuronal development of the auditory system conforms to the timing in other hemimetabolous insects.

  5. [Which colours can we hear?: light stimulation of the hearing system].

    PubMed

    Wenzel, G I; Lenarz, T; Schick, B

    2014-02-01

    The success of conventional hearing aids and electrical auditory prostheses for hearing impaired patients is still limited in noisy environments and for sounds more complex than speech (e.g. music). This is partially due to the difficulty of frequency-specific activation of the auditory system using these devices. Stimulation of the auditory system using light pulses represents an alternative to mechanical and electrical stimulation. Light is a source of energy that can be focused very precisely and applied with little scattering, thus offering perspectives for optimal activation of the auditory system. Studies investigating light stimulation of sectors along the auditory pathway have shown that stimulation of the auditory system is possible using light pulses. However, further studies and developments are needed before a new generation of light stimulation-based auditory prostheses can be made available for clinical application.

  6. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    PubMed

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
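    Earcons and parameter-mapping sonification, as used in the eye-tracking system above, turn interaction state into sound. Below is a minimal, hypothetical Python sketch of parameter-mapping sonification, not the authors' implementation: a normalized input (e.g., gaze-dwell progress) is mapped to pitch on a logarithmic scale, and a short tone is synthesized. The 200-1000 Hz range and the sine-tone rendering are assumptions for illustration.

```python
import math

def map_to_frequency(value, lo=200.0, hi=1000.0):
    """Map a normalized value in [0, 1] to a frequency (Hz) on a
    logarithmic scale, matching roughly logarithmic pitch perception."""
    value = min(max(value, 0.0), 1.0)  # clamp out-of-range input
    return lo * (hi / lo) ** value

def sine_tone(freq_hz, dur_s=0.2, fs=44100, amp=0.3):
    """Synthesize a short sine tone as a list of float samples."""
    n = int(dur_s * fs)
    return [amp * math.sin(2 * math.pi * freq_hz * i / fs) for i in range(n)]

# e.g., sonify 50% dwell progress as a tone near 447 Hz
tone = sine_tone(map_to_frequency(0.5))
```

A logarithmic rather than linear mapping is the usual design choice here, since equal steps of the input then correspond to roughly equal perceived pitch intervals.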

  7. Developmental and cross-modal plasticity in deafness: evidence from the P1 and N1 event related potentials in cochlear implanted children.

    PubMed

    Sharma, Anu; Campbell, Julia; Cardon, Garrett

    2015-02-01

    Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., auditory) in cochlear implanted children may influence their behavioral outcomes with the implant. Improvements in our understanding of developmental neuroplasticity in the auditory system should lead to harnessing central auditory plasticity for superior clinical technique. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Talking back: Development of the olivocochlear efferent system.

    PubMed

    Frank, Michelle M; Goodrich, Lisa V

    2018-06-26

    Developing sensory systems must coordinate the growth of neural circuitry spanning from receptors in the peripheral nervous system (PNS) to multilayered networks within the central nervous system (CNS). This breadth presents particular challenges, as nascent processes must navigate across the CNS-PNS boundary and coalesce into a tightly intermingled wiring pattern, thereby enabling reliable integration from the PNS to the CNS and back. In the auditory system, feedforward spiral ganglion neurons (SGNs) from the periphery collect sound information via tonotopically organized connections in the cochlea and transmit this information to the brainstem for processing via the VIII cranial nerve. In turn, feedback olivocochlear neurons (OCNs) housed in the auditory brainstem send projections into the periphery, also through the VIII nerve. OCNs are motor neuron-like efferent cells that influence auditory processing within the cochlea and protect against noise damage in adult animals. These aligned feedforward and feedback systems develop in parallel, with SGN central axons reaching the developing auditory brainstem around the same time that the OCN axons extend out toward the developing inner ear. Recent findings have begun to unravel the genetic and molecular mechanisms that guide OCN development, from their origins in a generic pool of motor neuron precursors to their specialized roles as modulators of cochlear activity. One recurrent theme is the importance of efferent-afferent interactions, as afferent SGNs guide OCNs to their final locations within the sensory epithelium, and efferent OCNs shape the activity of the developing auditory system. This article is categorized under: Nervous System Development > Vertebrates: Regional Development. © 2018 Wiley Periodicals, Inc.

  9. Auditory processing deficits in growth restricted fetuses affect later language development.

    PubMed

    Kisilevsky, Barbara S; Davies, Gregory A L

    2007-01-01

    An increased risk for language deficits in infants born growth restricted has been reported in follow-up studies for more than 20 years, suggesting a relation between fetal auditory system development and later language learning. Work with animal models indicates that there are at least two ways in which growth restriction could affect the development of auditory perception in human fetuses: a delay in myelination or conduction and an increase in sensorineural threshold. Systematic study of auditory function in growth restricted human fetuses has not been reported. However, results of studies employing low-risk fetuses who were delivered as healthy full-term infants demonstrate that, by late gestation, the fetus can hear, sound properties modulate behavior, and sensory information is available from both inside (e.g., maternal vascular) and outside (e.g., noise, voices, music) of the maternal body. These data provide substantive evidence that the auditory system is functioning and that environmental sounds are available for shaping neural networks and laying the foundation for language acquisition before birth. We hypothesize that fetal growth restriction affects auditory system development, resulting in atypical auditory information processing in growth restricted fetuses compared to healthy, appropriately-grown-for-gestational-age fetuses. Speech perception, which lays the foundation for later language competence, will differ in growth restricted compared to normally grown fetuses and will be associated with later language abilities.

  10. A novel hybrid auditory BCI paradigm combining ASSR and P300.

    PubMed

    Kaongoen, Netiwit; Jo, Sungho

    2017-03-01

    Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients with visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines auditory steady state response (ASSR) and spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSR, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) in the binary classification problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can result in better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
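    The accuracy and ITR figures above are linked by the standard Wolpaw information-transfer-rate formula, which the sketch below implements. At the reported 85.33% binary accuracy it yields about 0.40 bits per selection, so 9.11 bits/min implies roughly 23 selections per minute; the selection rate itself is not stated in the record and is an inference, and accuracy is assumed strictly above chance.

```python
import math

def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Bits per selection under the Wolpaw ITR model:
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), with 0 < P <= 1."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)  # perfect accuracy: full log2(N) bits
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """Scale bits per selection by the selection rate to get bits/min."""
    return wolpaw_bits_per_selection(n_classes, accuracy) * selections_per_min
```

Note that at chance level (P = 1/N) the formula correctly gives zero bits, so accuracy gains near chance contribute very little ITR.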

  11. Web-based auditory self-training system for adult and elderly users of hearing aids.

    PubMed

    Vitti, Simone Virginia; Blasca, Wanderléia Quinhoneiro; Sigulem, Daniel; Torres Pisa, Ivan

    2015-01-01

    Adults and elderly users of hearing aids suffer psychosocial reactions as a result of hearing loss. Auditory rehabilitation is typically carried out with support from a speech therapist, usually in a clinical center. For these cases, there is a lack of computer-based self-training tools for minimizing the psychosocial impact of hearing deficiency. The aim was to develop and evaluate a web-based auditory self-training system for adult and elderly users of hearing aids. Two modules were developed for the web system: an information module based on guidelines for using hearing aids; and an auditory training module presenting a sequence of training exercises for auditory abilities along the lines of the auditory skill steps within auditory processing. We built the web system using the PHP programming language and a MySQL database, based on requirements surveyed through focus groups conducted by healthcare information technology experts. The web system was evaluated by speech therapists and hearing aid users. An initial sample of 150 patients at DSA/HRAC/USP was defined for applying the system, with the inclusion criteria that individuals should be over 25 years of age, have a hearing impairment, be hearing aid users, and have a computer and internet experience. They were divided into two groups: a control group (G1) and an experimental group (G2). These patients were evaluated clinically using the HHIA for adults and the HHIE for elderly people, before and after system implementation. A third group (G3) was formed with users who were invited through social networks to give their opinions on using the system. A questionnaire evaluating hearing complaints was given to all three groups. The study hypothesis was that G2 would present greater auditory perception, higher satisfaction, and fewer complaints than G1 after the auditory training. It was expected that G3 would have fewer complaints regarding use and acceptance of the system.
The web system, which was named SisTHA portal, was finalized, rated by experts and hearing aid users and approved for use. The system comprised auditory skills training along five lines: discrimination; recognition; comprehension and temporal sequencing; auditory closure; and cognitive-linguistic and communication strategies. Users needed to undergo auditory training over a minimum period of 1 month: 5 times a week for 30 minutes a day. Comparisons were made between G1 and G2 and web system use by G3. The web system developed was approved for release to hearing aid users. It is expected that the self-training will help improve effective use of hearing aids, thereby decreasing their rejection.

  12. System and algorithm for evaluation of human auditory analyzer state

    NASA Astrophysics Data System (ADS)

    Bachynskiy, Mykhaylo V.; Azarkhov, Oleksandr Yu.; Shtofel, Dmytro Kh.; Horbatiuk, Svitlana M.; Ławicki, Tomasz; Kalizhanova, Aliya; Smailova, Saule; Askarova, Nursanat

    2017-08-01

    The paper discusses the evaluation of the human auditory state with technical means and considers the disadvantages of existing clinical audiometry methods and systems. A method for evaluating the state of the auditory analyzer by means of pulsometry is proposed to make the medical examination more objective and efficient. The method uses two optoelectronic sensors located on the carotid artery and the ear lobe. On this basis, a biotechnical system for evaluation and stimulation of the human auditory analyzer state was developed, and its hardware and software were substantiated. Different stimulation modes of the designed system were tested, and the influence of the procedure on the patient was studied.

  13. Estrogenic modulation of auditory processing: a vertebrate comparison

    PubMed Central

    Caras, Melissa L.

    2013-01-01

    Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849

  14. A virtual display system for conveying three-dimensional acoustic information

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.

    1988-01-01

    The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.
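    A basic spatial cue that any such auditory display must reproduce is the interaural time difference (ITD). As a hedged illustration (not the synthesis technique of the system described above), Woodworth's classic spherical-head approximation gives the ITD for a distant source; the head radius and speed of sound below are assumed average values, not figures from the record.

```python
import math

HEAD_RADIUS_M = 0.0875      # assumed average adult head radius
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a distant source at the
    given azimuth, via Woodworth's spherical-head formula:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))
```

Under these assumptions a source directly to one side (90 degrees azimuth) yields an ITD in the region of 650-660 microseconds, consistent with classic localization data.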

  15. Neuroplasticity in the auditory system.

    PubMed

    Gil-Loyzaga, P

    2005-01-01

    An increasing interest in neuroplasticity and nerve regeneration within the auditory receptor and pathway has developed in recent years. The receptor and the auditory pathway are controlled by highly complex circuits that appear during embryonic development. During this early maturation process of the auditory sensory elements, we observe the development of two types of nerve fibers: permanent fibers that will remain to reach full-term maturity and other transient fibers that will ultimately disappear. Both stable and transitory fibers, however, as well as developing sensory cells, express, and probably release, their respective neurotransmitters that could be involved in neuroplasticity. Cell culture experiments have added significant information; the in vitro administration of glutamate or GABA to isolated spiral ganglion neurons clearly modified neural development. Neuroplasticity has also been found in the adult. Nerve regeneration and neuroplasticity have been demonstrated in the adult auditory receptors as well as throughout the auditory pathway. Neuroplasticity studies could prove interesting in the elaboration of current or future therapy strategies (e.g., cochlear implants or stem cells), but also in understanding the pathogenesis of auditory or language diseases (e.g., deafness, tinnitus, dyslexia).

  16. A Dynamic Compressive Gammachirp Auditory Filterbank

    PubMed Central

    Irino, Toshio; Patterson, Roy D.

    2008-01-01

    It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describes such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of the auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the “dcGC” filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks. PMID:19330044
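    The gammachirp extends the gammatone impulse response by adding a log-time chirp term, c*ln(t), to the carrier phase. The Python sketch below is a minimal illustration of a passive gammachirp impulse response, not the dynamic dcGC filter of the paper; the parameter values (n=4, b=1.81, c=-2.96) and the Glasberg-Moore ERB formula are commonly cited defaults assumed here for illustration only.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth in Hz (Glasberg-Moore formula)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def gammachirp(fr_hz, fs_hz, dur_s=0.025, n=4, b=1.81, c=-2.96):
    """Impulse response of a passive gammachirp filter centered near fr_hz.
    Setting c = 0 reduces this to an ordinary gammatone filter."""
    # start at one sample, since log(0) in the chirp term is undefined
    t = np.arange(1, int(dur_s * fs_hz)) / fs_hz
    envelope = t ** (n - 1) * np.exp(-2.0 * np.pi * b * erb(fr_hz) * t)
    carrier = np.cos(2.0 * np.pi * fr_hz * t + c * np.log(t))
    g = envelope * carrier
    return g / np.max(np.abs(g))  # peak-normalize
```

A filterbank version would simply evaluate this at a set of center frequencies spaced uniformly on the ERB scale.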

  17. An initial survey of national airspace system auditory alarm issues in terminal air traffic control.

    DOT National Transportation Integrated Search

    2003-04-01

    A researcher from the Research Development & Human Factors Laboratory of the William J. Hughes Technical Center conducted an exploratory study to examine current National Airspace System (NAS) auditory alarm issues. The purpose was to identify proble...

  18. A longitudinal study of auditory evoked field and language development in young children.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Ueno, Sanae; Shitamichi, Kiyomi; Remijn, Gerard B; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Furutani, Naoki; Oi, Manabu; Munesue, Toshio; Tsubokawa, Tsunehisa; Higashida, Haruhiro; Minabe, Yoshio

    2014-11-01

    The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural brain response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Method for Dissecting the Auditory Epithelium (Basilar Papilla) in Developing Chick Embryos.

    PubMed

    Levic, Snezana; Yamoah, Ebenezer N

    2016-01-01

    Chickens are an invaluable model for exploring auditory physiology. Similar to humans, the chicken inner ear is morphologically and functionally close to maturity at the time of hatching. In contrast, chicks can regenerate hearing, an ability lost in all mammals, including humans. The extensive morphological, physiological, behavioral, and pharmacological data available regarding normal development in the chicken auditory system have driven the progress of the field. The basilar papilla is an attractive model system to study the developmental mechanisms of hearing. Here, we describe the dissection technique for isolating the basilar papilla in the developing chick inner ear. We also provide detailed examples of physiological (patch clamping) experiments using this preparation.

  20. Connecting the ear to the brain: molecular mechanisms of auditory circuit assembly

    PubMed Central

    Appler, Jessica M.; Goodrich, Lisa V.

    2011-01-01

    Our sense of hearing depends on precisely organized circuits that allow us to sense, perceive, and respond to complex sounds in our environment, from music and language to simple warning signals. Auditory processing begins in the cochlea of the inner ear, where sounds are detected by sensory hair cells and then transmitted to the central nervous system by spiral ganglion neurons, which faithfully preserve the frequency, intensity, and timing of each stimulus. During the assembly of auditory circuits, spiral ganglion neurons establish precise connections that link hair cells in the cochlea to target neurons in the auditory brainstem, develop specific firing properties, and elaborate unusual synapses both in the periphery and in the CNS. Understanding how spiral ganglion neurons acquire these unique properties is a key goal in auditory neuroscience, as these neurons represent the sole input of auditory information to the brain. In addition, the best currently available treatment for many forms of deafness is the cochlear implant, which compensates for lost hair cell function by directly stimulating the auditory nerve. Historically, studies of the auditory system have lagged behind other sensory systems due to the small size and inaccessibility of the inner ear. With the advent of new molecular genetic tools, this gap is narrowing. Here, we summarize recent insights into the cellular and molecular cues that guide the development of spiral ganglion neurons, from their origin in the proneurosensory domain of the otic vesicle to the formation of specialized synapses that ensure rapid and reliable transmission of sound information from the ear to the brain. PMID:21232575

  1. Auditory evoked potentials in children and adolescents with Down syndrome.

    PubMed

    Gregory, Letícia; Rosa, Rafael F M; Zen, Paulo R G; Sleifer, Pricila

    2018-01-01

    Down syndrome, or trisomy 21, is the most common genetic alteration in humans. The syndrome presents with several features, including hearing loss and changes in the central nervous system, which may affect language development in children and lead to school difficulties. The present study aimed to investigate group differences in the central auditory system by long-latency auditory evoked potentials and cognitive potential. An assessment of 23 children and adolescents with Down syndrome was performed, and a control group composed of 43 children and adolescents without genetic and/or neurological changes was used for comparison. All children underwent evaluation with pure tone and vocal audiometry, acoustic immitance measures, long-latency auditory evoked potentials, and cognitive potential. Longer wave latencies were found in the Down syndrome group than in the control group, without significant differences in amplitude, suggesting that individuals with Down syndrome have difficulty in discrimination and auditory memory. It is, therefore, important to stimulate and monitor these children in order to enable adequate development and improve their quality of life. We also emphasize the importance of the application of auditory evoked potentials in clinical practice, in order to contribute to the early diagnosis of hearing alterations and the development of more research in this area. © 2017 Wiley Periodicals, Inc.

  2. Operation Bull's Eye/ARDS (Auditory Reading Development System). Final Report.

    ERIC Educational Resources Information Center

    District of Columbia Public Schools, Washington, DC.

The Auditory Reading Development System (ARDS) was devised to meet the educational needs of a segment of the model cities population that had not been reached by other programs. The ARDS is geared to teach students whose reading levels fall in the ranges 0.0 through 3.9, 4.0 through 6.9, and 7.0 through 8.9. The target population is reached through…

  3. Development of sound measurement systems for auditory functional magnetic resonance imaging.

    PubMed

    Nam, Eui-Cheol; Kim, Sam Soo; Lee, Kang Uk; Kim, Sang Sik

    2008-06-01

Auditory functional magnetic resonance imaging (fMRI) requires quantification of sound stimuli in the magnetic environment and adequate isolation of background noise. We report the development of two novel sound measurement systems that accurately measure the sound intensity inside the ear while simultaneously providing scanner-noise protection similar to or greater than that of earmuffs. First, we placed a 2.6 x 2.6-mm microphone in an insert phone that was connected to a headphone [microphone-integrated, foam-tipped insert-phone with a headphone (MIHP)]. This attenuated scanner noise by 37.8+/-4.6 dB, better than the reference attenuation obtained using earmuffs. Second, a nonmetallic optical microphone was integrated with a headphone [optical microphone in a headphone (OMHP)]; it effectively detected changes in sound intensity caused by variable compression of the headphone cushions. Wearing the OMHP reduced noise by 28.5+/-5.9 dB and did not affect echo-planar magnetic resonance images. We also performed an auditory fMRI study using the MIHP system and demonstrated an increase in auditory cortical activation following a 10-dB increment in the intensity of sound stimulation. These two newly developed sound measurement systems achieved accurate quantification of sound stimuli while maintaining noise protection comparable to wearing earmuffs in the auditory fMRI experiment.
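The attenuation figures quoted above (e.g., 37.8 dB) express ratios of sound pressure on a logarithmic scale. As a minimal illustration (the function names are ours, not the authors'), the conversion between an RMS pressure ratio and dB attenuation can be sketched as:

```python
import math

def attenuation_db(p_unprotected: float, p_protected: float) -> float:
    """Attenuation in dB from RMS sound pressures measured without
    and with hearing protection (20 * log10 of the pressure ratio)."""
    return 20.0 * math.log10(p_unprotected / p_protected)

def pressure_ratio(attenuation: float) -> float:
    """Inverse: the pressure ratio implied by a given dB attenuation."""
    return 10.0 ** (attenuation / 20.0)

# A 10-fold pressure reduction corresponds to 20 dB of attenuation.
print(attenuation_db(10.0, 1.0))  # → 20.0
```

On this scale, the reported 37.8 dB attenuation corresponds to roughly a 78-fold reduction in sound pressure at the ear.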

  4. Protective Effects of Ginkgo biloba Extract EGb 761 against Noise Trauma-Induced Hearing Loss and Tinnitus Development

    PubMed Central

    Korn, Sabine

    2014-01-01

Noise-induced hearing loss (NIHL) and resulting comorbidities such as subjective tinnitus are common in modern societies. A substance shown to be effective against NIHL in an animal model is the Ginkgo biloba extract EGb 761. Further effects of the extract at the cellular and systemic levels of the nervous system make it a promising candidate for protection not only against NIHL but also against secondary comorbidities such as tinnitus. Following an earlier study, we here tested the potential effectiveness of prophylactic EGb 761 treatment against NIHL and tinnitus development in the Mongolian gerbil. We monitored the effects of EGb 761 and noise trauma-induced changes on signal processing within the auditory system by means of behavioral and electrophysiological approaches. We found significantly reduced NIHL and tinnitus development upon EGb 761 application, compared to vehicle-treated animals. These protective effects of EGb 761 were correlated with changes in auditory processing at both peripheral and central levels. We propose a model with two main effects of EGb 761 on auditory processing: first, an increase of auditory brainstem activity leading to an increased thalamic input to the primary auditory cortex (AI), and second, an asymmetric effect on lateral inhibition in AI. PMID:25028612

  5. Auditory-motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development.

    PubMed

    Terband, H; Maassen, B; Guenther, F H; Brumberg, J

    2014-01-01

    Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    PubMed Central

Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC®, www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC®). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks. Conclusions. 
Our findings suggest that the gaming EEG system may prove a valid alternative to laboratory ERP systems for recording reliable late auditory ERPs (P1, N1, P2, N2, and the P3) over the frontal cortices. In the future, the gaming EEG system may also prove useful for measuring less reliable ERPs, such as the MMN, if the reliability of such ERPs can be boosted to the same level as late auditory ERPs. PMID:23638374
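Peak amplitude and latency measures like those compared in this study are typically read from an averaged waveform within a component-specific time window. A minimal sketch of such a peak picker, with toy data and made-up window values rather than the authors' actual analysis pipeline:

```python
import numpy as np

def peak_in_window(erp, times_ms, window, polarity):
    """Return (latency_ms, amplitude) of the most extreme point of the
    requested polarity (+1 positive, -1 negative) within a time window.
    `erp` is an averaged waveform; illustrative only."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.flatnonzero(mask)
    best = idx[np.argmax(erp[idx] * polarity)]
    return float(times_ms[best]), float(erp[best])

# Toy averaged ERP sampled at 1 kHz: a positive, P1-like bump near 100 ms.
t = np.arange(0, 500)                              # 0-499 ms post-stimulus
wave = np.exp(-((t - 100) ** 2) / (2 * 15 ** 2))   # Gaussian bump
lat, amp = peak_in_window(wave, t, (50, 150), +1)  # → (100.0, 1.0)
```

Negative components (N1, N2, MMN) would be picked the same way with `polarity=-1` and their own windows.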

  7. Validation of the Emotiv EPOC(®) EEG gaming system for measuring research quality auditory ERPs.

    PubMed

    Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants - particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC(®), www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC(®)). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks. 
Conclusions. Our findings suggest that the gaming EEG system may prove a valid alternative to laboratory ERP systems for recording reliable late auditory ERPs (P1, N1, P2, N2, and the P3) over the frontal cortices. In the future, the gaming EEG system may also prove useful for measuring less reliable ERPs, such as the MMN, if the reliability of such ERPs can be boosted to the same level as late auditory ERPs.

  8. Measuring the performance of visual to auditory information conversion.

    PubMed

    Tan, Shern Shiou; Maul, Tomás Henrique Bode; Mennie, Neil Russell

    2013-01-01

Visual-to-auditory conversion systems have been in existence for several decades. Image sonification systems are among the front runners in providing visual capabilities to blind users, and their auditory cues are easier to learn and adapt to than those of other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure their performance. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank them accordingly. Performance is measured by both the interpretability and the information preservation of visual-to-auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) and inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both visual and corresponding auditory signals. These measurements provide a basis and some insights into how the systems work. With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to lead secure and productive lives.
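The interpretability measure described here correlates pairwise distances in image space with the corresponding distances in sound space. A minimal sketch of that idea, assuming Euclidean distance and Pearson correlation (the abstract does not specify which distance or correlation the authors used):

```python
import numpy as np
from itertools import combinations

def interpretability_score(images, sounds):
    """Correlate pairwise inter-image distances (IID) with the
    corresponding inter-sound distances (ISD). A higher correlation
    suggests the sonification preserves the visual structure."""
    iid, isd = [], []
    for a, b in combinations(range(len(images)), 2):
        iid.append(np.linalg.norm(images[a] - images[b]))
        isd.append(np.linalg.norm(sounds[a] - sounds[b]))
    return float(np.corrcoef(iid, isd)[0, 1])

# An identity "sonification" preserves all distances exactly, so r = 1.
imgs = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 3.0])]
print(interpretability_score(imgs, imgs))  # → 1.0
```

Real image and sound feature vectors would replace the toy 2-D points; a lossy conversion would distort the distance structure and pull the correlation below 1.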

  9. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table of representative auditory patterns. 
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  10. The effect of noise exposure during the developmental period on the function of the auditory system.

    PubMed

    Bureš, Zbyněk; Popelář, Jiří; Syka, Josef

    2017-09-01

    Recently, there has been growing evidence that development and maturation of the auditory system depends substantially on the afferent activity supplying inputs to the developing centers. In cases when this activity is altered during early ontogeny as a consequence of, e.g., an unnatural acoustic environment or acoustic trauma, the structure and function of the auditory system may be severely affected. Pathological alterations may be found in populations of ribbon synapses of the inner hair cells, in the structure and function of neuronal circuits, or in auditory driven behavioral and psychophysical performance. Three characteristics of the developmental impairment are of key importance: first, they often persist to adulthood, permanently influencing the quality of life of the subject; second, their manifestations are different and sometimes even contradictory to the impairments induced by noise trauma in adulthood; third, they may be 'hidden' and difficult to diagnose by standard audiometric procedures used in clinical practice. This paper reviews the effects of early interventions to the auditory system, in particular, of sound exposure during ontogeny. We summarize the results of recent morphological, electrophysiological, and behavioral experiments, discuss the putative mechanisms and hypotheses, and draw possible consequences for human neonatal medicine and noise health. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Self-Operated Auditory Prompting Systems: Creating and Using Them to Support Students with Disabilities

    ERIC Educational Resources Information Center

    Savage, Melissa N.

    2014-01-01

    Some students with disabilities develop a dependence on others for support and can benefit from self-management strategies to increase independence. Self-operated auditory prompting systems are an effective self-management intervention used to increase independence for students with disabilities while continuing to provide the support that they…

  12. The function of BDNF in the adult auditory system.

    PubMed

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Use of a highly transparent zebrafish mutant for investigations in the development of the vertebrate auditory system (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wisniowiecki, Anna M.; Mattison, Scott P.; Kim, Sangmin; Riley, Bruce; Applegate, Brian E.

    2016-03-01

The zebrafish, an auditory specialist among fish, possesses auditory structures analogous to those of other vertebrates and is a model for hearing and deafness in vertebrates, including humans. Nevertheless, many questions remain about the basic mechanics of the auditory pathway. Phase-sensitive optical coherence tomography has proven to be a valuable technique for functional vibrometric measurements in the murine ear, and such measurements are key to building a complete understanding of auditory mechanics. Applying these techniques in the zebrafish is impeded by the high level of pigmentation, which develops superior to the transverse plane and superficially envelops the auditory system. A zebrafish double mutant for nacre and roy (mitfa-/- ;roya-/- [casper]), which lacks neural-crest-derived melanocytes and iridophores at all stages of development, is pursued to improve image quality and sensitivity for functional imaging. So far, our investigations with the casper mutant have enabled identification of the specialized hearing organs, the fluid-filled canal connecting the ears, and sub-structures of the semicircular canals. In our previous work with wild-type zebrafish, we were only able to identify and observe stimulated vibration of the largest structures, specifically the anterior swim bladder and tripus ossicle, even in small larval specimens with fully developed inner ears. In conclusion, this genetic mutant will enable the study of the dynamics of the zebrafish ear from the early larval stages all the way into adulthood.

  14. On the Role of Auditory Feedback in Robot-Assisted Movement Training after Stroke: Review of the Literature

    PubMed Central

    Rodà, Antonio; Avanzini, Federico; Masiero, Stefano

    2013-01-01

    The goal of this paper is to address a topic that is rarely investigated in the literature of technology-assisted motor rehabilitation, that is, the integration of auditory feedback in the rehabilitation device. After a brief introduction on rehabilitation robotics, the main concepts of auditory feedback are presented, together with relevant approaches, techniques, and technologies available in this domain. Current uses of auditory feedback in the context of technology-assisted rehabilitation are then reviewed. In particular, a comparative quantitative analysis over a large corpus of the recent literature suggests that the potential of auditory feedback in rehabilitation systems is currently and largely underexploited. Finally, several scenarios are proposed in which the use of auditory feedback may contribute to overcome some of the main limitations of current rehabilitation systems, in terms of user engagement, development of acute-phase and home rehabilitation devices, learning of more complex motor tasks, and improving activities of daily living. PMID:24382952

  15. The ability to tap to a beat relates to cognitive, linguistic, and perceptual skills

    PubMed Central

    Tierney, Adam T.; Kraus, Nina

    2013-01-01

    Reading-impaired children have difficulty tapping to a beat. Here we tested whether this relationship between reading ability and synchronized tapping holds in typically-developing adolescents. We also hypothesized that tapping relates to two other abilities. First, since auditory-motor synchronization requires monitoring of the relationship between motor output and auditory input, we predicted that subjects better able to tap to the beat would perform better on attention tests. Second, since auditory-motor synchronization requires fine temporal precision within the auditory system for the extraction of a sound’s onset time, we predicted that subjects better able to tap to the beat would be less affected by backward masking, a measure of temporal precision within the auditory system. As predicted, tapping performance related to reading, attention, and backward masking. These results motivate future research investigating whether beat synchronization training can improve not only reading ability, but potentially executive function and basic auditory processing as well. PMID:23400117

  16. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    PubMed

    Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin

    2015-01-01

Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to assess the effect of perinatal asphyxia on the auditory pathway by recording auditory brainstem responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1-3 day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the auditory brainstem responses (ABRs) of newborn piglets exposed to acute hypoxia-ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes for 6 h after the HI injury. ABRs were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia-ischemia appeared to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III, and V amplitudes, although the differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  17. Music training for the development of auditory skills.

    PubMed

    Kraus, Nina; Chandrasekaran, Bharath

    2010-08-01

    The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.

  18. Estimating subglottal pressure via airflow interruption with auditory masking.

    PubMed

    Hoffman, Matthew R; Jiang, Jack J

    2009-11-01

    Current noninvasive measurement of subglottal pressure using airflow interruption often produces inconsistent results due to the elicitation of audio-laryngeal reflexes. Auditory feedback could be considered as a means of ensuring measurement accuracy and precision. The purpose of this study was to determine if auditory masking could be used with the airflow interruption system to improve intrasubject consistency. A prerecorded sample of subject phonation was played on a loop over headphones during the trials with auditory masking. This provided subjects with a target pitch and blocked out distracting ambient noise created by the airflow interrupter. Subglottal pressure was noninvasively measured using the airflow interruption system. Thirty subjects, divided into two equal groups, performed 10 trials without auditory masking and 10 trials with auditory masking. Group one performed the normal trials first, followed by the trials with auditory masking. Group two performed the auditory masking trials first, followed by the normal trials. Intrasubject consistency was improved by adding auditory masking, resulting in a decrease in average intrasubject standard deviation from 0.93+/-0.51 to 0.47+/-0.22 cm H(2)O (P < 0.001). Auditory masking can be used effectively to combat audio-laryngeal reflexes and aid subjects in maintaining constant glottal configuration and frequency, thereby increasing intrasubject consistency when measuring subglottal pressure. By considering auditory feedback, a more reliable method of measurement was developed. This method could be used by clinicians, as reliable, immediately available values of subglottal pressure are useful in evaluating laryngeal health and monitoring treatment progress.
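The consistency metric reported in this record (average intrasubject standard deviation, which fell from 0.93 to 0.47 cm H2O under masking) is straightforward to compute. A sketch with hypothetical trial values, assuming the sample standard deviation is intended:

```python
import statistics

def mean_intrasubject_sd(trials_by_subject):
    """Average of each subject's standard deviation across repeated
    subglottal-pressure trials (cm H2O). A drop in this value is the
    consistency improvement the study reports."""
    sds = [statistics.stdev(trials) for trials in trials_by_subject]
    return sum(sds) / len(sds)

# Two hypothetical subjects, three trials each (values in cm H2O).
masked = [[7.0, 7.2, 7.1], [8.0, 8.1, 7.9]]
print(round(mean_intrasubject_sd(masked), 2))  # → 0.1
```

In the study itself this quantity would be computed over 10 trials per subject, once for the masked condition and once for the unmasked condition, and the two averages compared.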

  19. Current understanding of auditory neuropathy.

    PubMed

    Boo, Nem-Yun

    2008-12-01

Auditory neuropathy is defined by the presence of normal evoked otoacoustic emissions (OAE) and absent or abnormal auditory brainstem responses (ABR). The site of the lesion may be the cochlear inner hair cells, the spiral ganglion cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or the auditory nerve itself. Genetic, infectious, and neonatal/perinatal insults are the three most commonly identified underlying causes. Children usually present with delayed speech and language development, while adult patients present with hearing loss and disproportionately poor speech discrimination for the degree of hearing loss. Although cochlear implantation is the treatment of choice, current evidence shows that it benefits only those patients with endocochlear lesions, not those with cochlear nerve deficiency or central nervous system disorders. As auditory neuropathy is a disorder with potential long-term impact on a child's development, early hearing screening using both OAE and ABR should be carried out on all newborns and infants to allow early detection and intervention.

  20. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    PubMed Central

    Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630

  1. Pairing broadband noise with cortical stimulation induces extensive suppression of ascending sensory activity

    NASA Astrophysics Data System (ADS)

    Markovitz, Craig D.; Hogan, Patrick S.; Wesen, Kyle A.; Lim, Hubert H.

    2015-04-01

    Objective. The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. Approach. We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. Main results. All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. Significance. We propose that the corticofugal system is designed to decrease the brain’s input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.

  2. The Role of Age and Executive Function in Auditory Category Learning

    PubMed Central

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
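
    The conjunctive rule-based strategy identified by the computational modeling can be sketched as a rule that assigns a category only when criteria on both stimulus dimensions are satisfied jointly. The function below is a toy illustration, not the authors' model; the dimension names and boundary values are hypothetical.

```python
def conjunctive_rule(spectral_mod, temporal_mod,
                     spectral_criterion=1.0, temporal_criterion=8.0):
    """Toy conjunctive rule over two stimulus dimensions.

    Category "A" requires BOTH dimensions to exceed their criteria;
    the criterion values here are hypothetical, for illustration only.
    """
    if spectral_mod > spectral_criterion and temporal_mod > temporal_criterion:
        return "A"
    return "B"
```

    A unidimensional rule would instead test only one of the two dimensions; the modeling result described above is that learners increasingly adopt the conjunctive, two-dimensional form with age.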

  3. Utilising reinforcement learning to develop strategies for driving auditory neural implants.

    PubMed

    Lee, Geoffrey W; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G

    2016-08-01

    In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning acoustic neural stimulation strategies. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals, we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and recording of neural responses in the IC provide a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Learning effective replication using neural stimulation takes less than 20 min under continuous testing. These results show the utility of reinforcement learning in the field of neural stimulation and can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
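
    The abstract does not specify the modified n-Armed Bandit solution in detail. As a hedged sketch of the general approach, a minimal epsilon-greedy bandit over candidate stimulation patterns might look like the following; the reward function, which stands in for a similarity score between evoked and target neural responses, is entirely hypothetical.

```python
import random

def epsilon_greedy_bandit(reward_fn, n_arms, n_trials, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy n-armed bandit.

    Each 'arm' stands in for a candidate stimulation pattern;
    reward_fn(arm) returns a noisy score for how well the evoked
    response matches the target response.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        r = reward_fn(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Hypothetical reward landscape: arm 3 best mimics the target activity.
def reward(arm, _rng=random.Random(1)):
    true_means = [0.2, 0.4, 0.3, 0.9, 0.1]
    return true_means[arm] + _rng.gauss(0, 0.05)

values, counts = epsilon_greedy_bandit(reward, n_arms=5, n_trials=2000)
best = max(range(5), key=lambda a: values[a])  # converges on arm 3
```

    In a closed-loop setting of the kind described, the reward would instead be computed from recorded IC responses; the bandit machinery itself is unchanged.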

  4. A behavioral framework to guide research on central auditory development and plasticity

    PubMed Central

    Sanes, Dan H.; Woolley, Sarah M. N.

    2011-01-01

    The auditory CNS is influenced profoundly by sounds heard during development. Auditory deprivation and augmented sound exposure can each perturb the maturation of neural computations as well as their underlying synaptic properties. However, we have learned little about the emergence of perceptual skills in these same model systems, and especially how perception is influenced by early acoustic experience. Here, we argue that developmental studies must take greater advantage of behavioral benchmarks. We discuss quantitative measures of perceptual development, and suggest how they can play a much larger role in guiding experimental design. Most importantly, including behavioral measures will allow us to establish empirical connections among environment, neural development, and perception. PMID:22196328

  5. Perinatal exposure to a noncoplanar polychlorinated biphenyl alters tonotopy, receptive fields, and plasticity in rat primary auditory cortex

    PubMed Central

    Kenet, T.; Froemke, R. C.; Schreiner, C. E.; Pessah, I. N.; Merzenich, M. M.

    2007-01-01

    Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in the human environment and in human tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossly abnormal or reversed in about half of the exposed pups, the balance of neuronal inhibition to excitation for A1 neurons was disturbed, and the critical period plasticity that underlies normal postnatal auditory system development was significantly altered. These findings demonstrate that developmental exposure to this class of environmental contaminant alters cortical development. It is proposed that exposure to noncoplanar PCBs may contribute to common developmental disorders, especially in populations with heritable imbalances in neurotransmitter systems that regulate the ratio of inhibition to excitation in the brain. We conclude that the health implications associated with exposure to noncoplanar PCBs in human populations merit a more careful examination. PMID:17460041

  6. Auditory brain development in premature infants: the importance of early experience.

    PubMed

    McMahon, Erin; Wintermark, Pia; Lahav, Amir

    2012-04-01

    Preterm infants in the neonatal intensive care unit (NICU) often close their eyes in response to bright lights, but they cannot close their ears in response to loud sounds. The sudden transition from the womb to the overly noisy world of the NICU increases the vulnerability of these high-risk newborns. There is a growing concern that the excess noise typically experienced by NICU infants disrupts their growth and development, putting them at risk for hearing, language, and cognitive disabilities. Preterm neonates are especially sensitive to noise because their auditory system is at a critical period of neurodevelopment, and they are no longer shielded by maternal tissue. This paper discusses the developmental milestones of the auditory system and suggests ways to enhance the quality control and type of sounds delivered to NICU infants. We argue that positive auditory experience is essential for early brain maturation and may be a contributing factor for healthy neurodevelopment. Further research is needed to optimize the hospital environment for preterm newborns and to increase their potential to develop into healthy children. © 2012 New York Academy of Sciences.

  7. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of a dynamic operating environment. The system operates in real time, providing the information needed for the particular augmented sensing task. The Sensing Super-position device increases perceived image resolution by supplementing the visual representation with an auditory one. Auditory mapping is performed to distribute an image in time: the three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, and this known capability provided the basic motivation for developing an image-to-sound mapping system.

  8. Respiratory sinus arrhythmia and auditory processing in autism: modifiable deficits of an integrated social engagement system?

    PubMed

    Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J

    2013-06-01

    The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation through significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  10. EFFECTS OF DEVELOPMENTAL HYPOTHYROIDISM ON AUDITORY AND MOTOR FUNCTION IN THE RAT

    EPA Science Inventory

    Deafness is a common result of severe hypothyroidism during development in humans and laboratory animals; however, little is known regarding the sensitivity of the auditory system to more moderate changes in thyroid hormone homeostasis. The present investigation compared the relati...

  11. Chronic low-level Pb exposure during development decreases the expression of the voltage-dependent anion channel in auditory neurons of the brainstem.

    PubMed

    Prins, John M; Brooks, Diane M; Thompson, Charles M; Lurie, Diana I

    2010-12-01

    Lead (Pb) exposure is a risk factor for neurological dysfunction. How Pb produces these behavioral deficits is unknown, but Pb exposure during development is associated with auditory temporal processing deficits in both humans and animals. Pb disrupts cellular energy metabolism, and efficient energy production is crucial for auditory neurons to maintain high rates of synaptic activity. The voltage-dependent anion channel (VDAC) is involved in the regulation of mitochondrial physiology and is a critical component in controlling mitochondrial energy production. We have previously demonstrated that VDAC is an in vitro target for Pb; therefore, VDAC may represent a potential target for Pb in the auditory system. In order to determine whether Pb alters VDAC expression in central auditory neurons, CBA/CaJ mice (n=3-5/group) were exposed to 0.01 mM or 0.1 mM Pb acetate during development via drinking water. At P21, immunohistochemistry revealed a significant decrease in VDAC in neurons of the medial nucleus of the trapezoid body. Western blot analysis confirmed that Pb results in a significant decrease in VDAC. Decreases in VDAC expression could lead to an upregulation of other cellular energy-producing systems as a compensatory mechanism, and a Pb-induced increase in brain-type creatine kinase is observed in auditory regions of the brainstem. In addition, comparative proteomic analysis shows that several proteins of the glycolytic pathway, the phosphocreatine circuit, and oxidative phosphorylation are also upregulated in response to developmental Pb exposure. Thus, Pb-induced decreases in VDAC could have a significant effect on the function of auditory neurons. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Glycinergic Pathways of the Central Auditory System and Adjacent Reticular Formation of the Rat.

    NASA Astrophysics Data System (ADS)

    Hunter, Chyren

    The development of techniques to visualize and identify specific transmitters of neuronal circuits has stimulated work on the characterization of pathways in the rat central nervous system that utilize the inhibitory amino acid glycine as their neurotransmitter. Glycine is a major inhibitory transmitter in the spinal cord and brainstem of vertebrates, where it satisfies the major criteria for neurotransmitter action. Some of these characteristics are: uneven distribution in brain, high affinity reuptake mechanisms, inhibitory neurophysiological actions on certain neuronal populations, uneven receptor distribution, and the specific antagonism of its actions by the convulsant alkaloid strychnine. Behaviorally, antagonism of glycinergic neurotransmission in the medullary reticular formation is linked to the development of myoclonus and seizures, which may be initiated by auditory as well as other stimuli. In the present study, decreases in the concentration of glycine, as well as in the density of glycine receptors, in the medulla with aging were found and may be responsible for the lowered threshold for strychnine seizures observed in older rats. Neuroanatomical pathways in the central auditory system and medullary and pontine reticular formation (RF) were investigated using retrograde transport of tritiated glycine to identify glycinergic pathways; immunohistochemical techniques were used to corroborate the location of glycine neurons. Within the central auditory system, retrograde transport studies using tritiated glycine demonstrated an ipsilateral glycinergic pathway linking nuclei of the ascending auditory system. This pathway has its cell bodies in the medial nucleus of the trapezoid body (MNTB) and projects to the ventrocaudal division of the ventral nucleus of the lateral lemniscus (VLL). Collaterals of this glycinergic projection terminate in the ipsilateral lateral superior olive (LSO). Other glycinergic pathways found were afferent to the VLL and have their origin in the ventral and lateral nuclei of the trapezoid body (MVPO and LVPO). Bilateral projections from the nucleus reticularis pontis oralis (RPOo) to the VLL were also identified as glycinergic. This projection may link motor output systems to ascending auditory input, generating the auditory behavioral responses seen with glycine antagonism in animal models of myoclonus and seizure.

  13. Early experience shapes vocal neural coding and perception in songbirds

    PubMed Central

    Woolley, Sarah M. N.

    2012-01-01

    Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals. PMID:22711657

  14. Mechanisms of plasticity (functional and activity-dependent) in the adult and developing auditory brain

    PubMed Central

    Izquierdo, M.A.; Oliver, D.L.; Malmierca, M.S.

    2010-01-01

    Introduction and development: Sensory systems show a topographic representation of the sensory epithelium in the central nervous system; in the auditory system this representation gives rise to tonotopic maps. For the last four decades, changes in these tonotopic maps have been widely studied, either after peripheral mechanical lesions or by exposing animals to an augmented acoustic environment. These sensory manipulations induce plastic reorganizations of the tonotopic map of the auditory cortex. By contrast, acoustic trauma does not seem to induce functional plasticity in subcortical nuclei. The mechanisms that generate these changes differ in their molecular basis and time course, and two can be distinguished: those involving an active reorganization process, and those that simply reflect the loss of peripheral afferents. Only the former constitute a genuine process of plastic reorganization. Neuronal plasticity is critical for the normal development and function of the adult auditory system, as well as for the rehabilitation needed after the implantation of auditory prostheses. However, plasticity can also generate abnormal sensations such as tinnitus. Recently, a new concept in neurobiology, so-called 'neuronal stability', has emerged, and its implications and conceptual basis could help to improve treatments for hearing loss. Conclusion: A combination of neuronal plasticity and stability is suggested as a powerful and promising future strategy in the design of new treatments for hearing loss. PMID:19340783

  15. Transcranial magnetic stimulation for the treatment of tinnitus: a new coil positioning method and first results.

    PubMed

    Langguth, Berthold; Zowe, Marc; Landgrebe, Michael; Sand, Philipp; Kleinjung, Tobias; Binder, Harald; Hajak, Göran; Eichhammer, Peter

    2006-01-01

    Auditory phantom perceptions are associated with hyperactivity of the central auditory system. Neuronavigation-guided repetitive transcranial magnetic stimulation (rTMS) of the area of increased activity has been demonstrated to reduce tinnitus perception. The study aimed to develop an easily applicable standard procedure for transcranial magnetic stimulation of the primary auditory cortex and to investigate this coil positioning strategy for the treatment of chronic tinnitus in clinical practice. The left gyrus of Heschl was targeted in 25 healthy subjects using a frameless stereotactical system. Based on the individual scalp coordinates of the coil, a positioning strategy with reference to the 10-20 EEG system was developed. Using this coil positioning approach we started an open treatment trial: 28 patients with chronic tinnitus received 10 sessions of rTMS (intensity 110% of motor threshold, 1 Hz, 2000 stimuli/day). The scalp coordinates for stimulating the primary auditory cortex fell within a range of about 20 mm in diameter, allowing a standard procedure for coil positioning to be defined. Clinical validation of this coil positioning method resulted in a significant improvement in tinnitus complaints (p<0.001). The newly developed coil positioning strategy may offer an easier-to-use stimulation approach for treating chronic tinnitus than highly sophisticated, imaging-guided treatment methods.

  16. Auditory false perception in schizophrenia: Development and validation of auditory signal detection task.

    PubMed

    Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan

    2016-12-01

    Auditory hallucinations constitute an important symptom in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in the attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task, in which they were instructed to differentiate between a 5-s burst of plain white noise and voiced noise. The analysis showed that false alarms (p=0.02), the discriminability index (p=0.001) and decision bias (p=0.004) differed significantly between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings lend further support to the hypothesis of an impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
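
    The discriminability index and decision bias reported here are standard signal detection theory quantities (d′ and c) derived from hit and false-alarm rates. A minimal computation under the usual equal-variance Gaussian model is sketched below; this is not the authors' analysis code, and the trial counts are made up for illustration.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: discriminability d' and bias c.

    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf          # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # positive c = conservative bias
    return d_prime, c

# Hypothetical counts for one participant (50 signal, 50 noise trials).
d, c = sdt_indices(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

    A higher false-alarm rate at a fixed hit rate lowers d′ and pushes c toward liberal (negative) values, which is the direction of the group difference described above.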

  17. The cytokine macrophage migration inhibitory factor (MIF) acts as a neurotrophin in the developing inner ear of the zebrafish, Danio rerio

    PubMed Central

    Shen, Yu-chi; Thompson, Deborah L.; Kuah, Meng-Kiat; Wong, Kah-Loon; Wu, Karen L.; Linn, Stephanie A.; Jewett, Ethan M.; Shu-Chien, Alexander Chong; Barald, Kate F.

    2012-01-01

    Macrophage migration inhibitory factor (MIF) plays versatile roles in the immune system. MIF is also widely expressed during embryonic development, particularly in the nervous system, although its roles in neural development are only beginning to be understood. Evidence from frogs, mice and zebrafish suggests that MIF has a major role as a neurotrophin in the early development of sensory systems, including the auditory system. Here we show that the zebrafish mif pathway is required for both sensory hair cell (HC) and sensory neuronal cell survival in the ear, for HC differentiation, semicircular canal formation, statoacoustic ganglion (SAG) development, and lateral line HC differentiation. This is consistent with our findings that MIF is expressed in the developing mammalian and avian auditory systems and promotes mouse and chick SAG neurite outgrowth and neuronal survival, demonstrating key instructional roles for MIF in vertebrate otic development. PMID:22210003

  18. Auditory system dysfunction in Alzheimer disease and its prodromal states: A review.

    PubMed

    Swords, Gabriel M; Nguyen, Lydia T; Mudar, Raksha A; Llano, Daniel A

    2018-07-01

    Recent findings suggest that both peripheral and central auditory system dysfunction occur in the prodromal stages of Alzheimer Disease (AD), and therefore may represent early indicators of the disease. In addition, loss of auditory function itself leads to communication difficulties, social isolation and poor quality of life for both patients with AD and their caregivers. Developing a greater understanding of auditory dysfunction in early AD may shed light on the mechanisms of disease progression and carry diagnostic and therapeutic importance. Herein, we review the literature on hearing abilities in AD and its prodromal stages investigated through methods such as pure-tone audiometry, dichotic listening tasks, and evoked response potentials. We propose that screening for peripheral and central auditory dysfunction in at-risk populations is a low-cost and effective means to identify early AD pathology and provides an entry point for therapeutic interventions that enhance the quality of life of AD patients. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Atoh1-lineal neurons are required for hearing and for the survival of neurons in the spiral ganglion and brainstem accessory auditory nuclei

    PubMed Central

    Maricich, Stephen M.; Xia, Anping; Mathes, Erin L.; Wang, Vincent Y.; Oghalai, John S.; Fritzsch, Bernd; Zoghbi, Huda Y.

    2009-01-01

    Atoh1 is a basic helix-loop-helix transcription factor necessary for the specification of inner ear hair cells and central auditory system neurons derived from the rhombic lip. We used the Cre-loxP system and two Cre-driver lines (Egr2Cre and Hoxb1Cre) to delete Atoh1 from different regions of the cochlear nucleus (CN) and accessory auditory nuclei (AAN). Adult Atoh1-conditional knockout mice (Atoh1CKO) are behaviorally deaf, have diminished auditory brainstem evoked responses and disrupted CN and AAN morphology and connectivity. In addition, Egr2; Atoh1CKO mice lose spiral ganglion neurons in the cochlea and AAN neurons during the first 3 days of life, revealing a novel critical period in the development of these neurons. These new mouse models of predominantly central deafness illuminate the importance of the CN for support of a subset of peripheral and central auditory neurons. PMID:19741118

  20. Pitch sensation involves stochastic resonance

    PubMed Central

    Martignoli, Stefan; Gomez, Florian; Stoop, Ruedi

    2013-01-01

    Pitch is a complex hearing phenomenon that results from elicited and self-generated cochlear vibrations. Read-off vibrational information is relayed higher up the auditory pathway, where it is then condensed into pitch sensation. How this can adequately be described in terms of physics has largely remained an open question. We have developed a peripheral hearing system (in hardware and software) that reproduces with great accuracy all salient pitch features known from biophysical and psychoacoustic experiments. At the level of the auditory nerve, the system exploits stochastic resonance to achieve this performance, which may explain the large amount of noise observed in the working auditory nerve. PMID:24045830
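
    Stochastic resonance, the mechanism this abstract invokes at the level of the auditory nerve, can be demonstrated with a toy hard-threshold detector: a subthreshold sinusoid produces no threshold crossings on its own, moderate added noise lets the crossings track the signal, and heavy noise swamps it. The sketch below is illustrative only and unrelated to the authors' hardware/software system; all parameters are arbitrary.

```python
import math
import random

def detection_correlation(noise_sd, threshold=1.0, amplitude=0.8,
                          n=20000, seed=42):
    """Correlate a subthreshold sine with the binary output of a
    hard-threshold detector, as a crude stochastic-resonance probe."""
    rng = random.Random(seed)
    xs = [amplitude * math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
    ys = [1.0 if x + rng.gauss(0, noise_sd) > threshold else 0.0 for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    if vx == 0 or vy == 0:
        return 0.0  # no crossings at all: nothing to correlate
    return cov / math.sqrt(vx * vy)

weak = detection_correlation(noise_sd=0.01)   # almost no noise: no crossings
medium = detection_correlation(noise_sd=0.5)  # moderate noise: signal recovered
heavy = detection_correlation(noise_sd=10.0)  # noise swamps the signal
```

    The non-monotonic dependence on noise level, with an intermediate optimum, is the signature of stochastic resonance and is one reason substantial noise in the auditory nerve need not be purely detrimental.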

  1. Functional and structural changes throughout the auditory system following congenital and early-onset deafness: implications for hearing restoration

    PubMed Central

    Butler, Blake E.; Lomber, Stephen G.

    2013-01-01

    The absence of auditory input, particularly during development, causes widespread changes in the structure and function of the auditory system, extending from peripheral structures into auditory cortex. In humans, the consequences of these changes are far-reaching and often include detriments to language acquisition and associated psychosocial issues. Much of what is currently known about the nature of deafness-related changes to auditory structures comes from studies of congenitally deaf or early-deafened animal models. Fortunately, the mammalian auditory system shows a high degree of preservation among species, allowing for generalization from these models to the human auditory system. This review begins with a comparison of common methods used to obtain deaf animal models, highlighting the specific advantages and anatomical consequences of each. Some consideration is also given to the effectiveness of methods used to measure hearing loss during and following deafening procedures. The structural and functional consequences of congenital and early-onset deafness have been examined across a variety of mammals. This review attempts to summarize these changes, which often involve alteration of hair cells and supporting cells in the cochleae, and anatomical and physiological changes that extend through subcortical structures and into cortex. The nature of these changes is discussed, and the impacts to neural processing are addressed. Finally, long-term changes in cortical structures are discussed, with a focus on the presence or absence of cross-modal plasticity. In addition to being of interest to our understanding of multisensory processing, these changes also have important implications for the use of assistive devices such as cochlear implants. PMID:24324409

  2. Threshold and Beyond: Modeling The Intensity Dependence of Auditory Responses

    PubMed Central

    2007-01-01

    In many studies of auditory-evoked responses to low-intensity sounds, the response amplitude appears to increase roughly linearly with the sound level in decibels (dB), corresponding to a logarithmic intensity dependence. But the auditory system is assumed to be linear in the low-intensity limit. The goal of this study was to resolve the seeming contradiction. Based on assumptions about the rate-intensity functions of single auditory-nerve fibers and the pattern of cochlear excitation caused by a tone, a model for the gross response of the population of auditory nerve fibers was developed. In accordance with signal detection theory, the model denies the existence of a threshold. This implies that regarding the detection of a significant stimulus-related effect, a reduction in sound intensity can always be compensated for by increasing the measurement time, at least in theory. The model suggests that the gross response is proportional to intensity when the latter is low (range I), and a linear function of sound level at higher intensities (range III). For intensities in between, it is concluded that noisy experimental data may provide seemingly irrefutable evidence of a linear dependence on sound pressure (range II). In view of the small response amplitudes that are to be expected for intensity range I, direct observation of the predicted proportionality with intensity will generally be a challenging task for an experimenter. Although the model was developed for the auditory nerve, the basic conclusions are probably valid for higher levels of the auditory system, too, and might help to improve models for loudness at threshold. PMID:18008105
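    The population model summarized above can be sketched numerically. The fiber count, threshold spread, and saturating rate-intensity function below are illustrative assumptions, not parameters from the study; the sketch only reproduces the qualitative regimes: response proportional to intensity at very low levels (range I), and growth roughly linear in level (dB) once fibers begin to saturate in turn (range III).

```python
import numpy as np

# Hypothetical fiber population: thresholds log-spaced over 8 decades
# (arbitrary intensity units), each fiber with a saturating rate function.
THRESHOLDS = np.logspace(-10, -2, 1000)

def gross_response(intensity):
    """Summed firing rate of the whole population at a given intensity."""
    return float(np.sum(intensity / (intensity + THRESHOLDS)))

# Range I: far below every threshold, doubling intensity doubles the response.
ratio = gross_response(2e-13) / gross_response(1e-13)

# Range III: within the threshold span, each tenfold intensity step recruits
# roughly the same number of newly saturated fibers, so the summed response
# grows approximately linearly with level in dB.
step1 = gross_response(1e-5) - gross_response(1e-6)
step2 = gross_response(1e-4) - gross_response(1e-5)
```

    Between these two regimes the summed response passes through the intermediate behavior the abstract assigns to range II.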

  3. The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition

    PubMed Central

    McLachlan, Neil M.; Wilson, Sarah J.

    2017-01-01

    The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information automatically recognizes incoming auditory signals and predicts likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850

  4. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    PubMed

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Previous studies have shown brain reorganization after early deprivation of auditory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, as well as weaker connectivity between the language and visual systems. Notably, significantly increased brain connectivity was found between auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents.

  5. Computerized classification of auditory trauma: Results of an investigation on screening employees exposed to noise

    NASA Technical Reports Server (NTRS)

    Klockhoff, I.

    1977-01-01

    An automatic, computerized method was developed to classify results from a screening of employees exposed to noise, resulting in a fast and effective method of identifying and taking measures against auditory trauma. This technique also satisfies the urgent need for quick discovery of cases which deserve compensation in accordance with the Law on Industrial Accident Insurance. Unfortunately, use of this method increases the burden on the already overloaded investigatory resources of the auditory health care system.

  6. Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus.

    PubMed

    Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J

    1998-12-01

    Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus. J. Neurophysiol. 80: 2941-2953, 1998. The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalities integrated with a motor output system. It is implicated in saccade generation and in behavioral responses to novel sensory stimuli, and receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to chronically deliver neurotransmitter receptor antagonists to the SC of the guinea pig to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer could develop a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine resulted in a reduction in the proportion of spatially tuned responses, with an increase in the proportion of broadly tuned responses and a degradation in topographic order.
    Thus N-methyl-D-aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some but not total disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC, but not necessarily in the generation of the topographical order of these responses.

  7. Metabotropic glutamate receptors in auditory processing

    PubMed Central

    Lu, Yong

    2014-01-01

    As the major excitatory neurotransmitter used in the vertebrate brain, glutamate activates ionotropic and metabotropic glutamate receptors (mGluRs), which mediate fast and slow neuronal actions, respectively. Important modulatory roles of mGluRs have been shown in many brain areas, and drugs targeting mGluRs have been developed for treatment of brain disorders. Here, I review the studies on mGluRs in the auditory system. Anatomical expression of mGluRs in the cochlear nucleus has been well characterized, while data for other auditory nuclei await more systematic investigations at both the light and electron microscopy levels. The physiology of mGluRs has been extensively studied using in vitro brain slice preparations, with a focus on the lower auditory brainstem in both mammals and birds. These in vitro physiological studies have revealed that mGluRs participate in neurotransmission, regulate ionic homeostasis, induce synaptic plasticity, and maintain the balance between excitation and inhibition in a variety of auditory structures. However, very few in vivo physiological studies on mGluRs in auditory processing have been undertaken at the systems level. Many questions regarding the essential roles of mGluRs in auditory processing still remain unanswered and more rigorous basic research is warranted. PMID:24909898

  8. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    PubMed Central

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei differ from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation -- comparable to repetitive tonal stimulation -- of the lemniscal system evokes three major types of changes in the physiological properties of cortical and subcortical auditory neurons, such as their tuning to specific values of acoustic parameters, through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning elicit tone-specific and nonspecific plastic changes, respectively. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulation of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273

  9. A comparative analysis of auditory perception in humans and songbirds: a modular approach.

    PubMed

    Weisman, Ronald; Hoeschele, Marisa; Sturdy, Christopher B

    2014-05-01

    We propose that a relatively small number of perceptual skills underlie human perception of music and speech. Humans and songbirds share a number of features in the development of their auditory communication systems. These similarities invite comparisons between species in their auditory perceptual skills. Here, we summarize our experimental comparisons between humans (and other mammals) and songbirds (and other birds) in their use of pitch height and pitch chroma perception, and discuss similarities and differences in other auditory perceptual abilities of these species. Specifically, we introduce a functional modular view, using pitch chroma and pitch height perception as examples, as a theoretical framework for the comparative study of auditory perception and perhaps all of comparative cognition. We also contrast phylogeny and adaptation as causal mechanisms in comparative cognition, using examples from auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.
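    The two pitch dimensions compared in this record have simple formal definitions: height increases monotonically with log-frequency, while chroma is the octave-equivalent position within the octave. A minimal sketch follows; the middle-C reference frequency is an arbitrary illustrative choice, not one specified by the authors.

```python
import math

def pitch_height(freq_hz):
    """Monotonic pitch-height dimension: one unit per octave."""
    return math.log2(freq_hz)

def pitch_chroma(freq_hz, ref_hz=261.63):
    """Octave-equivalent chroma in [0, 1); ref_hz (middle C here) is arbitrary."""
    return math.log2(freq_hz / ref_hz) % 1.0

# A4 (440 Hz) and A5 (880 Hz) differ by one unit of height but share a chroma,
# which is what makes octave equivalence a separable perceptual dimension.
```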

  10. The onset of visual experience gates auditory cortex critical periods

    PubMed Central

    Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.

    2016-01-01

    Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281

  11. Auditory Alterations in Children Infected by Human Immunodeficiency Virus Verified Through Auditory Processing Test

    PubMed Central

    Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima

    2016-01-01

    Introduction The auditory system of HIV-positive children may have deficits at various levels, such as the high incidence of middle ear problems that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) in the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing, as verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed more errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and memory auditory skills. Furthermore, there were more errors under background-noise conditions in both age groups, where most errors were in the left ear in the group of 8-year-olds, with similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213

  12. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  13. Sound envelope processing in the developing human brain: A MEG study.

    PubMed

    Tang, Huizhen; Brock, Jon; Johnson, Blake W

    2016-02-01

    This study investigated auditory cortical processing of linguistically-relevant temporal modulations in the developing brains of young children. Auditory envelope following responses to white noise amplitude modulated at rates of 1-80 Hz in healthy children (aged 3-5 years) and adults were recorded using a paediatric magnetoencephalography (MEG) system and a conventional MEG system, respectively. For children, there were envelope following responses to slow modulations but no significant responses to rates higher than about 25 Hz, whereas adults showed significant envelope following responses to almost the entire range of stimulus rates. Our results show that the auditory cortex of preschool-aged children has a sharply limited capacity to process rapid amplitude modulations in sounds, as compared to the auditory cortex of adults. These neurophysiological results are consistent with previous psychophysical evidence for a protracted maturational time course for auditory temporal processing. The findings are also in good agreement with current linguistic theories that posit a perceptual bias for low frequency temporal information in speech during language acquisition. These insights also have clinical relevance for our understanding of language disorders that are associated with difficulties in processing temporal information in speech. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
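    A stimulus of the kind used in this study -- white noise with a sinusoidal amplitude envelope -- can be generated as follows. The sampling rate, modulation depth, and duration are illustrative choices, not the study's parameters.

```python
import numpy as np

def am_noise(duration_s, mod_rate_hz, fs=44100, depth=1.0, seed=0):
    """White noise whose amplitude envelope is sinusoidally modulated."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)          # broadband white noise
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
    return envelope * carrier

# 1 s of noise modulated at 25 Hz, near the cutoff reported for the children.
stimulus = am_noise(1.0, mod_rate_hz=25.0)
```

    Sweeping `mod_rate_hz` from 1 to 80 Hz reproduces the stimulus range described in the abstract.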

  14. Grammatical Language Impairment and the Specificity of Cognitive Domains: Relations between Auditory and Language Abilities

    ERIC Educational Resources Information Center

    van der Lely, Heather K. J.; Rosen, Stuart; Adlard, Alan

    2004-01-01

    Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, particular to rapid processing, cause phonological…

  15. Auditory Habituation in the Fetus and Neonate: An fMEG Study

    ERIC Educational Resources Information Center

    Muenssinger, Jana; Matuz, Tamara; Schleger, Franziska; Kiefer-Schmidt, Isabelle; Goelz, Rangmar; Wacker-Gussmann, Annette; Birbaumer, Niels; Preissl, Hubert

    2013-01-01

    Habituation--the most basic form of learning--is used to evaluate central nervous system (CNS) maturation and to detect abnormalities in fetal brain development. In the current study, habituation, stimulus specificity and dishabituation of auditory evoked responses were measured in fetuses and newborns using fetal magnetoencephalography (fMEG). An…

  16. Elevated depressive symptoms enhance reflexive but not reflective auditory category learning.

    PubMed

    Maddox, W Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G

    2014-09-01

    In vision, an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially dissociable. The reflective system is prefrontally mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization, and even less is known about the effects of individual differences in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we predicted that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. The implications of this work for the dual-process approach to auditory learning and depression are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing

    PubMed Central

    Aboitiz, Francisco

    2018-01-01

    In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication accompanies the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657

  19. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
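    Of the localization cues listed in this abstract, the interaural time difference has a standard closed-form approximation: the Woodworth spherical-head model. This is a textbook formula, not something taken from the paper, and the head radius below is a typical assumed value rather than a measured one.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a far-field
    source at the given azimuth, using the Woodworth spherical-head model."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))
```

    A source directly ahead gives zero ITD, while a source at 90 degrees gives roughly 0.66 ms, near the maximum for an adult-sized head; generic HRTFs bake in cues like this for an average listener, which is why per-user calibration can improve localization.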

  20. Analytical and numerical modeling of the hearing system: Advances towards the assessment of hearing damage.

    PubMed

    De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis

    2017-06-01

    Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate this process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system is not only to explain the normal function of the hearing system and account for experimental and clinical observations, but also to simulate a variety of pathological conditions that lead to hearing damage and hearing loss, as well as to support the development of auditory implants, effective ear protection and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle and inner ear, and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Humans (AHAAH) model, used for prediction of hearing damage due to high-intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not yet able to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimulus conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
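    Damage-risk models such as AHAAH operate on pressure waveforms, and the link between pressure and level used throughout this literature is the standard dB SPL definition from general acoustics (it is not specific to any one model described above):

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, the nominal hearing threshold

def spl_db(pressure_pa):
    """Sound pressure level in dB SPL relative to 20 µPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# Each tenfold increase in pressure adds 20 dB: 0.2 Pa is 80 dB SPL.
```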

  1. Feasibility of and Design Parameters for a Computer-Based Attitudinal Research Information System

    DTIC Science & Technology

    1975-08-01

    Auditory Displays Auditory Evoked Potentials Auditory Feedback Auditory Hallucinations Auditory Localization Auditory Masking Auditory Neurons...surprising to hear these problems expressed once again and in the same old refrain. The Navy attitude surveyors were frustrated when they...Audiology Audiometers Audiometry Audiotapes Audiovisual Communications Media Audiovisual Instruction Auditory Cortex Auditory

  2. Prenatal Nicotine Exposure Disrupts Infant Neural Markers of Orienting.

    PubMed

    King, Erin; Campbell, Alana; Belger, Aysenil; Grewen, Karen

    2018-06-07

    Prenatal nicotine exposure (PNE) from maternal cigarette smoking is linked to developmental deficits, including impaired auditory processing, language, generalized intelligence, attention, and sleep. The fetal brain undergoes massive growth, organization, and connectivity during gestation, making it particularly vulnerable to neurotoxic insult. Nicotine binds to nicotinic acetylcholine receptors, which are extensively involved in the growth, connectivity, and function of developing neural circuitry and neurotransmitter systems. Thus, PNE may have a long-term impact on neurobehavioral development. The purpose of this study was to compare the auditory K-complex, an event-related potential reflective of auditory gating, sleep preservation, and memory consolidation during sleep, in infants with and without PNE and to relate these neural correlates to neurobehavioral development. We compared brain responses to an auditory paired-click paradigm in 3- to 5-month-old infants during Stage 2 sleep, when the K-complex is best observed. We measured component amplitude and delta activity during the K-complex. Infants with PNE demonstrated significantly smaller amplitude of the N550 component and reduced delta-band power within elicited K-complexes compared to nonexposed infants, and also were less likely to orient with a head turn to a novel auditory stimulus (bell ring) when awake. PNE may impair auditory sensory gating, which may contribute to disrupted sleep and to reduced auditory discrimination and learning, attention re-orienting, and/or arousal during wakefulness reported in other studies. Links between PNE and reduced K-complex amplitude and delta power may represent altered cholinergic and GABAergic synaptic programming and possibly reflect early neural bases for PNE-linked disruptions in sleep quality and auditory processing. These may pose significant disadvantage for language acquisition, attention, and social interaction necessary for academic and social success.
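    The delta-band power measure used above can be approximated offline with a standard spectral estimate: compute the power spectral density of each K-complex epoch and integrate it over the delta band. A minimal sketch, not the authors' pipeline; the sampling rate, band edges, and synthetic test epoch are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import welch

    def delta_band_power(epoch, fs, band=(0.5, 4.0)):
        """Absolute delta-band power of one EEG epoch via Welch's PSD."""
        freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * int(fs)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        df = freqs[1] - freqs[0]  # frequency resolution of the PSD
        return float(np.sum(psd[mask]) * df)

    # Hypothetical 2 s epoch sampled at 250 Hz containing 2 Hz (delta) activity
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    epoch = np.sin(2 * np.pi * 2 * t)
    power = delta_band_power(epoch, fs)
    ```

    Integrating the PSD (rather than averaging it) keeps the result in squared-amplitude units, so epochs of different lengths remain comparable.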

  3. ERP evaluation of auditory sensory memory systems in adults with intellectual disability.

    PubMed

    Ikeda, Kazunari; Hashimoto, Souichi; Hayashi, Akiko; Kanno, Atsushi

    2009-01-01

    The auditory sensory memory stage can be functionally divided into two subsystems: a transient-detector system and a permanent feature-detector system (Näätänen, 1992). We assessed these systems in persons with intellectual disability by measuring the event-related potentials (ERPs) N1 and mismatch negativity (MMN), which reflect the two auditory subsystems, respectively. In addition, P3a (an ERP reflecting the stage after sensory memory) was evaluated. Either synthesized vowels or simple tones were delivered during a passive oddball paradigm to adults with and without intellectual disability. ERPs were recorded from midline scalp sites (Fz, Cz, and Pz). Relative to the control group, participants with the disability exhibited greater N1 latency and smaller MMN amplitude. The results for N1 amplitude and MMN latency were basically comparable between the groups. IQ scores in participants with the disability showed no significant relation with N1 and MMN measures, whereas IQ scores tended to increase significantly as P3a latency decreased. These outcomes suggest that persons with intellectual disability may have distinct malfunctions of the two detector systems at the auditory sensory-memory stage. Moreover, the processes following sensory memory might be partly related to a determinant of mental development.

  4. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  5. Neuromonitoring of cochlea and auditory nerve with multiple extracted parameters during induced hypoxia and nerve manipulation

    NASA Astrophysics Data System (ADS)

    Bohórquez, Jorge; Özdamar, Özcan; Morawski, Krzysztof; Telischi, Fred F.; Delgado, Rafael E.; Yavuz, Erdem

    2005-06-01

    A system capable of comprehensive and detailed monitoring of the cochlea and the auditory nerve during surgery was developed. The cochlear blood flow (CBF) and the electrocochleogram (ECochGm) were recorded at the round window (RW) niche using a specially designed otic probe. The ECochGm was further processed to obtain cochlear microphonics (CM) and compound action potentials (CAP). The amplitude and phase of the CM were used to quantify the activity of outer hair cells (OHC); CAP amplitude and latency were used to describe the auditory nerve and the synaptic activity of the inner hair cells (IHC). In addition, concurrent monitoring with a second electrophysiological channel was achieved by recording the compound nerve action potential (CNAP) obtained directly from the auditory nerve. Stimulation paradigms, instrumentation, and signal processing methods were developed to extract and differentiate the activity of the OHC and the IHC in response to three different frequencies. Narrow-band acoustical stimuli elicited CM signals indicating mainly nonlinear operation of the mechano-electrical transduction of the OHCs. Special envelope detectors were developed and applied to the ECochGm to extract the CM fundamental component and its harmonics in real time. The system was extensively validated in experimental animal surgeries by performing nerve compressions and manipulations.
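    The abstract describes its "special envelope detectors" only at this level of detail. A common offline analogue for extracting the envelope of one narrow-band component (the CM fundamental or a harmonic) is band-pass filtering followed by the magnitude of the Hilbert analytic signal. A minimal sketch, with all frequencies and bandwidths chosen for illustration rather than taken from the study:

    ```python
    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def component_envelope(x, fs, f0, bandwidth=100.0):
        """Isolate one spectral component by band-pass filtering, then take
        the magnitude of the analytic signal as its instantaneous envelope."""
        sos = butter(4, [f0 - bandwidth / 2, f0 + bandwidth / 2],
                     btype="band", fs=fs, output="sos")
        narrow = sosfiltfilt(sos, x)    # zero-phase narrow-band component
        return np.abs(hilbert(narrow))  # Hilbert envelope

    # Hypothetical ECochG-like trace: 1 kHz fundamental plus a weaker 2 kHz harmonic
    fs = 20000
    t = np.arange(0, 0.1, 1 / fs)
    x = 1.0 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
    env_f0 = component_envelope(x, fs, 1000)
    env_h2 = component_envelope(x, fs, 2000)
    ```

    Away from the filter's edge transients, the two envelopes recover the component amplitudes (about 1.0 and 0.3 here). A true real-time detector would use a causal filter and rectify-and-smooth demodulation instead of the noncausal `sosfiltfilt`/`hilbert` pair.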

  6. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Amblyaudia: Review of Pathophysiology, Clinical Presentation, and Treatment of a New Diagnosis.

    PubMed

    Kaplan, Alyson B; Kozin, Elliott D; Remenschneider, Aaron; Eftekhari, Kian; Jung, David H; Polley, Daniel B; Lee, Daniel J

    2016-02-01

    Similar to amblyopia in the visual system, "amblyaudia" is a term used to describe persistent hearing difficulty experienced by individuals with a history of asymmetric hearing loss (AHL) during a critical window of brain development. Few clinical reports have described this phenomenon and its consequent effects on central auditory processing. We aim to (1) define the concept of amblyaudia and (2) review contemporary research on its pathophysiology and emerging clinical relevance. PubMed, Embase, and Cochrane databases. A systematic literature search was performed with combinations of search terms: "amblyaudia," "conductive hearing loss," "sensorineural hearing loss," "asymmetric," "pediatric," "auditory deprivation," and "auditory development." Relevant articles were considered for inclusion, including basic and clinical studies, case series, and major reviews. During critical periods of infant brain development, imbalanced auditory input associated with AHL may lead to abnormalities in binaural processing. Patients with amblyaudia can demonstrate long-term deficits in auditory perception even with correction or resolution of AHL. The greatest impact is in sound localization and hearing in noisy environments, both of which rely on bilateral auditory cues. Diagnosis and quantification of amblyaudia remain controversial and poorly defined. Prevention of amblyaudia may be possible through early identification and timely management of reversible causes of AHL. Otolaryngologists, audiologists, and pediatricians should be aware of emerging data supporting amblyaudia as a diagnostic entity and be cognizant of the potential for lasting consequences of AHL. Prevention of long-term auditory deficits may be possible through rapid identification and correction. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  8. Precision rodent whisker stimulator with integrated servo-locked control and displacement measurement.

    PubMed

    Walker, Jennifer L; Monjaraz-Fuentes, Fernanda; Pedrow, Christi R; Rector, David M

    2011-03-15

    We developed a high-speed, voice-coil-based whisker stimulator that delivers precise deflections of a single whisker or group of whiskers in a repeatable manner. The device is miniature, quiet, and inexpensive to build. Multiple stimulators fit together for independent stimulation of four or more whiskers. The system can be used with animals under anesthesia as well as awake animals with head-restraint, and does not require trimming the whiskers. The system can deliver 1-2 mm deflections in 2 ms, resulting in velocities up to 900 mm/s, to attain a wide range of evoked responses. Since auditory artifacts can influence behavioral studies using whisker stimulation, we tested potential effects of auditory noise by recording somatosensory evoked potentials (SEP) with varying auditory click levels, and with and without 80 dBA background white noise. We found that auditory clicks as low as 40 dBA significantly influence the SEP. With background white noise, auditory clicks as low as 50 dBA were still detected in components of the SEP. For behavioral studies where animals must learn to respond to whisker stimulation, these sounds must be minimized. Together, the stimulator and data system can be used for psychometric vigilance tasks, mapping of the barrel cortex, and other electrophysiological paradigms. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Recruitment of the auditory cortex in congenitally deaf cats by long-term cochlear electrostimulation.

    PubMed

    Klinke, R; Kral, A; Heid, S; Tillein, J; Hartmann, R

    1999-09-10

    In congenitally deaf cats, the central auditory system is deprived of acoustic input because of degeneration of the organ of Corti before the onset of hearing. Primary auditory afferents survive and can be stimulated electrically. By means of an intracochlear implant and an accompanying sound processor, congenitally deaf kittens were exposed to sounds and conditioned to respond to tones. After months of exposure to meaningful stimuli, the cortical activity in chronically implanted cats produced field potentials of higher amplitudes, expanded in area, developed long latency responses indicative of intracortical information processing, and showed more synaptic efficacy than in naïve, unstimulated deaf cats. The activity established by auditory experience resembles activity in hearing animals.

  10. Intertrial auditory neural stability supports beat synchronization in preschoolers

    PubMed Central

    Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina

    2016-01-01

    The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457

  11. The what, where and how of auditory-object perception.

    PubMed

    Bizley, Jennifer K; Cohen, Yale E

    2013-10-01

    The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.

  13. Neural coding strategies in auditory cortex.

    PubMed

    Wang, Xiaoqin

    2007-07-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

  14. Prospects for Replacement of Auditory Neurons by Stem Cells

    PubMed Central

    Shi, Fuxin; Edge, Albert S.B.

    2013-01-01

    Sensorineural hearing loss is caused by degeneration of hair cells or auditory neurons. Spiral ganglion cells, the primary afferent neurons of the auditory system, are patterned during development and send out projections to hair cells and to the brainstem under the control of largely unknown guidance molecules. The neurons do not regenerate after loss and even damage to their projections tends to be permanent. The genesis of spiral ganglion neurons and their synapses forms a basis for regenerative approaches. In this review we critically present the current experimental findings on auditory neuron replacement. We discuss the latest advances with a focus on (a) exogenous stem cell transplantation into the cochlea for neural replacement, (b) expression of local guidance signals in the cochlea after loss of auditory neurons, (c) the possibility of neural replacement from an endogenous cell source, and (d) functional changes from cell engraftment. PMID:23370457

  15. Functional mapping of the primate auditory system.

    PubMed

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  16. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.

    PubMed

    Hickok, Gregory; Buchsbaum, Bradley; Humphries, Colin; Muftuler, Tugan

    2003-07-01

    The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.

  17. Audio-vocal system regulation in children with autism spectrum disorders.

    PubMed

    Russo, Nicole; Larson, Charles; Kraus, Nina

    2008-06-01

    Do children with autism spectrum disorders (ASD) respond similarly to perturbations in auditory feedback as typically developing (TD) children? Presentation of pitch-shifted voice auditory feedback to vocalizing participants reveals a close coupling between the processing of auditory feedback and vocal motor control. This paradigm was used to test the hypothesis that abnormalities in the audio-vocal system would negatively impact ASD compensatory responses to perturbed auditory feedback. Voice fundamental frequency (F0) was measured while children produced an /a/ sound into a microphone. The voice signal was fed back to the subjects in real time through headphones. During production, the feedback was pitch shifted (-100 cents, 200 ms) at random intervals for 80 trials. Averaged voice F0 responses to pitch-shifted stimuli were calculated and correlated with both mental and language abilities as tested via standardized tests. A subset of children with ASD produced larger responses to perturbed auditory feedback than TD children, while the other children with ASD produced significantly lower response magnitudes. Furthermore, robust relationships between language ability, response magnitude and time of peak magnitude were identified. Because auditory feedback helps to stabilize voice F0 (a major acoustic cue of prosody) and individuals with ASD have problems with prosody, this study identified potential mechanisms of dysfunction in the audio-vocal system for voice pitch regulation in some children with ASD. Objectively quantifying this deficit may inform both the assessment of a subgroup of ASD children with prosody deficits, as well as remediation strategies that incorporate pitch training.
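    The -100 cent perturbation maps onto a frequency ratio through the standard definition of the cent (1200 cents per octave): the shifted feedback frequency is F0 × 2^(cents/1200). A small sketch of that arithmetic; the 220 Hz F0 is an illustrative value, not from the study:

    ```python
    def cents_to_ratio(cents: float) -> float:
        """Frequency ratio corresponding to a pitch shift in cents
        (1200 cents = one octave = a factor of 2)."""
        return 2.0 ** (cents / 1200.0)

    def shift_f0(f0_hz: float, cents: float) -> float:
        """Apply a pitch shift, given in cents, to a fundamental frequency."""
        return f0_hz * cents_to_ratio(cents)

    # A -100 cent shift (one equal-tempered semitone down) of a 220 Hz voice F0
    shifted = shift_f0(220.0, -100)  # about 207.7 Hz
    ```

    The logarithmic cent scale is used precisely because equal pitch steps correspond to equal frequency ratios, not equal differences in Hz.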

  18. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to those models to include nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  19. FGF23 Deficiency Leads to Mixed Hearing Loss and Middle Ear Malformation in Mice

    PubMed Central

    Lysaght, Andrew C.; Yuan, Quan; Fan, Yi; Kalwani, Neil; Caruso, Paul; Cunnane, MaryBeth; Lanske, Beate; Stanković, Konstantina M.

    2014-01-01

    Fibroblast growth factor 23 (FGF23) is a circulating hormone important in phosphate homeostasis. Abnormal serum levels of FGF23 result in systemic pathologies in humans and mice, including renal phosphate wasting diseases and hyperphosphatemia. We sought to uncover the role FGF23 plays in the auditory system due to shared molecular mechanisms and genetic pathways between ear and kidney development, the critical roles multiple FGFs play in auditory development, and the known hearing phenotype in mice deficient in klotho (KL), a critical co-factor for FGF23 signaling. Using functional assessments of hearing, we demonstrate that Fgf23−/− mice are profoundly deaf. Fgf23+/− mice have moderate hearing loss above 20 kHz, consistent with mixed conductive and sensorineural pathology of both middle and inner ear origin. Histology and high-voltage X-ray computed tomography of Fgf23−/− mice demonstrate dysplastic bullae and ossicles; Fgf23+/− mice have near-normal morphology. The cochleae of mutant mice appear nearly normal on gross and microscopic inspection. In wild type mice, FGF23 is ubiquitously expressed throughout the cochlea. Measurements from Fgf23−/− mice do not match the auditory phenotype of Kl−/− mice, suggesting that loss of FGF23 activity impacts the auditory system via mechanisms at least partially independent of KL. Given the extensive middle ear malformations and the overlap of initiation of FGF23 activity and Eustachian tube development, this work suggests a possible role for FGF23 in otitis media. PMID:25243481

  20. Developmental Emergence of Phenotypes in the Auditory Brainstem Nuclei of Fmr1 Knockout Mice

    PubMed Central

    Rotschafer, Sarah E.

    2017-01-01

    Abstract Fragile X syndrome (FXS), the most common monogenic cause of autism, is often associated with hypersensitivity to sound. Several studies have shown abnormalities in the auditory brainstem in FXS; however, the emergence of these auditory phenotypes during development has not been described. Here, we investigated the development of phenotypes in FXS model [Fmr1 knockout (KO)] mice in the ventral cochlear nucleus (VCN), medial nucleus of the trapezoid body (MNTB), and lateral superior olive (LSO). We studied features of the brainstem known to be altered in FXS or Fmr1 KO mice, including cell size and expression of markers for excitatory (VGLUT) and inhibitory (VGAT) synapses. We found that cell size was reduced in the nuclei with different time courses. VCN cell size is normal until after hearing onset, while MNTB and LSO show decreases earlier. VGAT expression was elevated relative to VGLUT in the Fmr1 KO mouse MNTB by P6, before hearing onset. Because glial cells influence development and are altered in FXS, we investigated their emergence in the developing Fmr1 KO brainstem. The number of microglia developed normally in all three nuclei in Fmr1 KO mice, but we found elevated numbers of astrocytes in Fmr1 KO in VCN and LSO at P14. The results indicate that some phenotypes are evident before spontaneous or auditory activity, while others emerge later, and suggest that Fmr1 acts at multiple sites and time points in auditory system development. PMID:29291238

  1. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  2. Auditory Hypersensitivity in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Lucker, Jay R.

    2013-01-01

    A review of records was completed to determine whether children with auditory hypersensitivities have difficulty tolerating loud sounds due to auditory-system factors or some other factors not directly involving the auditory system. Records of 150 children identified as not meeting autism spectrum disorders (ASD) criteria and another 50 meeting…

  3. Objective Metric Based Assessments for Efficient Evaluation of Auditory Situation Awareness Characteristics of Tactical Communications and Protective Systems (TCAPS) and Augmented Hearing Protective Devices (HPDs)

    DTIC Science & Technology

    2015-11-30

    Assessments for Efficient Evaluation of Auditory Situation Awareness Characteristics of Tactical Communications and Protective Systems (TCAPS) and Augmented...Hearing Protective Devices (HPDs) W81XWH-13-C-0193 John G. Casali, Ph.D, CPE & Kichol Lee, Ph.D Auditory Systems Lab, Industrial and Systems ...Suite 1 JBSA Lackland, TX 78236-9908 Approved for public release: distribution unlimited. The Virginia Tech Auditory Systems Laboratory (ASL

  4. Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.

    PubMed

    Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly

    2015-12-01

    Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing, including impaired spectrotemporal processing and enhanced pitch perception, may both contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology has benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 knockout mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.

  5. The Rhythm of Perception: Entrainment to Acoustic Rhythms Induces Subsequent Perceptual Oscillation.

    PubMed

    Hickok, Gregory; Farahbod, Haleh; Saberi, Kourosh

    2015-07-01

    Acoustic rhythms are pervasive in speech, music, and environmental sounds. Recent evidence for neural codes representing periodic information suggests that they may be a neural basis for the ability to detect rhythm. Further, rhythmic information has been found to modulate auditory-system excitability, which provides a potential mechanism for parsing the acoustic stream. Here, we explored the effects of a rhythmic stimulus on subsequent auditory perception. We found that a low-frequency (3 Hz), amplitude-modulated signal induces a subsequent oscillation of the perceptual detectability of a brief nonperiodic acoustic stimulus (1-kHz tone); the frequency but not the phase of the perceptual oscillation matches the entrained stimulus-driven rhythmic oscillation. This provides evidence that rhythmic contexts have a direct influence on subsequent auditory perception of discrete acoustic events. Rhythm coding is likely a fundamental feature of auditory-system design that predates the development of explicit human enjoyment of rhythm in music or poetry. © The Author(s) 2015.
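
    The stimulus structure described in this abstract (a 3 Hz amplitude-modulated entraining signal followed by a brief 1 kHz probe at a variable delay) is straightforward to sketch. The sample rate, durations, delay, and full modulation depth below are illustrative assumptions, not the study's exact parameters:

    ```python
    import numpy as np

    def am_entrainer(fs=44100, dur=3.0, fc=1000.0, fm=3.0):
        """Carrier at fc, amplitude-modulated at fm with full modulation depth."""
        t = np.arange(int(fs * dur)) / fs
        envelope = 0.5 * (1 + np.sin(2 * np.pi * fm * t))  # 3 Hz envelope, 0..1
        return envelope * np.sin(2 * np.pi * fc * t)

    def probe_tone(fs=44100, dur=0.02, f=1000.0):
        """Brief nonperiodic probe: a 20 ms 1 kHz tone burst."""
        t = np.arange(int(fs * dur)) / fs
        return np.sin(2 * np.pi * f * t)

    # One trial: entrainer, silent gap of variable length, then the probe.
    # Detectability of the probe is predicted to oscillate at the entrained
    # 3 Hz rate as a function of this delay.
    fs = 44100
    delay = 0.150  # seconds; varied across trials in the actual design
    trial = np.concatenate(
        [am_entrainer(fs), np.zeros(int(fs * delay)), probe_tone(fs)]
    )
    ```

    Sweeping `delay` across trials and plotting detection rate against it is what would expose the periodic fluctuation in detectability reported above.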

  6. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  7. Decreased echolocation performance following high-frequency hearing loss in the false killer whale (Pseudorca crassidens).

    PubMed

    Kloepper, L N; Nachtigall, P E; Gisiner, R; Breese, M

    2010-11-01

    Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may provide evidence for the reason for the development of high-frequency hearing in echolocating animals by demonstrating how high-frequency hearing assists in the functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.

  8. Auditory function in children with Charcot-Marie-Tooth disease.

    PubMed

    Rance, Gary; Ryan, Monique M; Bayliss, Kristen; Gill, Kathryn; O'Sullivan, Caitlin; Whitechurch, Marny

    2012-05-01

    The peripheral manifestations of the inherited neuropathies are increasingly well characterized, but their effects upon cranial nerve function are not well understood. Hearing loss is recognized in a minority of children with this condition, but has not previously been systematically studied. A clear understanding of the prevalence and degree of auditory difficulties in this population is important as hearing impairment can impact upon speech/language development, social interaction ability and educational progress. The aim of this study was to investigate auditory pathway function, speech perception ability and everyday listening and communication in a group of school-aged children with inherited neuropathies. Twenty-six children with Charcot-Marie-Tooth disease confirmed by genetic testing and physical examination participated. Eighteen had demyelinating neuropathies (Charcot-Marie-Tooth type 1) and eight had the axonal form (Charcot-Marie-Tooth type 2). While each subject had normal or near-normal sound detection, individuals in both disease groups showed electrophysiological evidence of auditory neuropathy with delayed or low amplitude auditory brainstem responses. Auditory perception was also affected, with >60% of subjects with Charcot-Marie-Tooth type 1 and >85% of those with Charcot-Marie-Tooth type 2 suffering impaired processing of auditory temporal (timing) cues and/or abnormal speech understanding in everyday listening conditions.

  9. Newborn hearing screening update for midwifery practice.

    PubMed

    Narrigan, D

    2000-01-01

    Neonatal identification of congenital hearing impairment allows interventions during the first 3 years, the critical period for language and speech development. Two recently developed biophysical testing methods offer simple, accurate, and relatively inexpensive means to identify the one to three in 1,000 healthy newborns with hearing loss. Universal screening for auditory system integrity is advocated, because almost half of all newborns with hearing impairment have no risk factors associated with this impairment. Critics of universal screening cite the high rate of false positive tests (up to 7%), which increases program costs through follow-up and re-testing of large numbers of infants to ensure identification of the few affected infants. As of early 2000, 24 states had introduced some type of auditory screening program, and the U.S. Congress had passed legislation with appropriations mandating state-based auditory screening for all newborns. Midwives practicing in states already mandating biophysical screening need to comply with their local requirements; those in other states may voluntarily incorporate new auditory test methods into practice.

  10. Noise exposure and oxidative balance in auditory and extra-auditory structures in adult and developing animals. Pharmacological approaches aimed to minimize its effects.

    PubMed

    Molina, S J; Miceli, M; Guelman, L R

    2016-07-01

    Noise coming from urban traffic, household appliances or discotheques might be as hazardous to the health of exposed people as occupational noise, because it may likewise cause hearing loss, changes in hormonal, cardiovascular and immune systems, and behavioral alterations. Noise can also affect sleep, work performance and productivity, as well as communication skills. Moreover, exposure to noise can trigger an oxidative imbalance between reactive oxygen species (ROS) and the activity of antioxidant enzymes in different structures, which can contribute to tissue damage. In this review we systematized the information from reports concerning noise effects on cell oxidative balance in different tissues, focusing on auditory and non-auditory structures. We paid specific attention to in vivo studies, including results obtained in adult and developing subjects. Finally, we discussed the pharmacological strategies tested by different authors aimed at minimizing the damaging effects of noise on living beings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face many hardships with shopping, reading, finding objects, and so on. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment-understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the user through an earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual-assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.

  12. Enhanced Development of Auditory Change Detection in Musically Trained School-Aged Children: A Longitudinal Event-Related Potential Study

    ERIC Educational Resources Information Center

    Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Ojala, Pauliina; Huotilainen, Minna

    2014-01-01

    Adult musicians show superior auditory discrimination skills when compared to non-musicians. The enhanced auditory skills of musicians are reflected in the augmented amplitudes of their auditory event-related potential (ERP) responses. In the current study, we investigated longitudinally the development of auditory discrimination skills in…

  13. Delayed auditory pathway maturation and prematurity.

    PubMed

    Koenighofer, Martin; Parzefall, Thomas; Ramsebner, Reinhard; Lucas, Trevor; Frei, Klemens

    2015-06-01

    Hearing loss is the most common sensory disorder in developed countries and leads to a severe reduction in quality of life. In this uncontrolled case series, we evaluated auditory development in patients suffering from congenital nonsyndromic hearing impairment related to preterm birth. Six patients delivered preterm (25th-35th gestational weeks), suffering from mild to profound congenital nonsyndromic hearing impairment and descending from healthy, nonconsanguineous parents, were evaluated by otoacoustic emissions, tympanometry, brainstem-evoked response audiometry, and genetic testing. All patients were treated with hearing aids, and one patient required cochlear implantation. One preterm infant (32nd gestational week) initially presented with a 70 dB hearing loss, accompanied by negative otoacoustic emissions and normal tympanometric findings. The patient was treated with hearing aids and displayed a gradual improvement in bilateral hearing that completely normalized by 14 months of age, accompanied by the development of otoacoustic emission responses. Conclusions: We present here for the first time a fully documented preterm patient with delayed auditory pathway maturation and normalization of hearing within 14 months of birth. Although rare, postpartum development of the auditory system should therefore be considered in the initial stages of treating preterm hearing-impaired patients.

  14. Ontogenetic development of the inner ear saccule and utricle in the Lusitanian toadfish: Potential implications for auditory sensitivity.

    PubMed

    Chaves, Patrícia P; Valdoria, Ciara M C; Amorim, M Clara P; Vasconcelos, Raquel O

    2017-09-01

    Studies addressing structure-function relationships of the fish auditory system during development are sparse compared to other taxa. The Batrachoididae has become an important group to investigate mechanisms of auditory plasticity and evolution of auditory-vocal systems. A recent study reported ontogenetic improvements in the inner ear saccule sensitivity of the Lusitanian toadfish, Halobatrachus didactylus, but whether this results from changes in the sensory morphology remains unknown. We investigated how the macula and organization of auditory receptors in the saccule and utricle change during growth in this species. Inner ear sensory epithelia were removed from the end organs of previously PFA-fixed specimens, from non-vocal posthatch fry (<1.4 cm, standard length) to adults (>23 cm). Epithelia were phalloidin-stained and analysed for area, shape, number and orientation patterns of hair cells (HC), and number and size of saccular supporting cells (SC). Saccular macula area expanded 41x in total, and significantly more (relative to body length) among vocal juveniles (2.3-2.9 cm). Saccular HC number increased 25x but HC density decreased, suggesting that HC addition is slower relative to epithelial growth. While SC density decreased, SC apical area increased, contributing to the epithelial expansion. The utricle revealed increased HC density (striolar region) and less epithelial expansion (5x) with growth, contrasting with the saccule, which may have a different developmental pattern due to its larger size and main auditory functions. Both macula shape and HC orientation patterns were already established in the posthatch fry and retained throughout growth in both end organs. We suggest that previously reported ontogenetic improvements in saccular sensitivity might be associated with changes in HC number (not density), size and/or molecular mechanisms controlling HC sensitivity. 
This is one of the first studies investigating the ontogenetic development of the saccule and utricle in a vocal fish and how it potentially relates to auditory enhancement for acoustic communication. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Auditory Cortex Basal Activity Modulates Cochlear Responses in Chinchillas

    PubMed Central

    León, Alex; Elgueda, Diego; Silva, María A.; Hamamé, Carlos M.; Delano, Paul H.

    2012-01-01

    Background: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. Methodology/Principal Findings: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitant with CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. Conclusions/Significance: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. 
The diversity of the obtained effects suggests that there are at least two functional pathways from the auditory cortex to the cochlea. PMID:22558383

  16. Developmental profiles of the intrinsic properties and synaptic function of auditory neurons in preterm and term baboon neonates.

    PubMed

    Kim, Sei Eun; Lee, Seul Yi; Blanco, Cynthia L; Kim, Jun Hee

    2014-08-20

    The human fetus starts to hear and undergoes major developmental changes in the auditory system during the third trimester of pregnancy. Although there are significant data regarding development of the auditory system in rodents, changes in intrinsic properties and synaptic function of auditory neurons in the developing primate brain at hearing onset are poorly understood. We performed whole-cell patch-clamp recordings of principal neurons in the medial nucleus of the trapezoid body (MNTB) in preterm and term baboon brainstem slices to study the structural and functional maturation of auditory synapses. Each MNTB principal neuron received an excitatory input from a single calyx of Held terminal, and this one-to-one pattern of innervation was already formed in preterm baboons delivered at 67% of normal gestation. There was no difference in frequency or amplitude of spontaneous excitatory postsynaptic currents between preterm and term MNTB neurons. In contrast, the frequency of spontaneous GABA(A)/glycine receptor-mediated inhibitory postsynaptic currents, which were prevalent in preterm MNTB neurons, was significantly reduced in term MNTB neurons. Preterm MNTB neurons had a higher input resistance than term neurons and fired in bursts, whereas term MNTB neurons fired a single action potential in response to suprathreshold current injection. The maturation of intrinsic properties and dominance of excitatory inputs in the primate MNTB allow it to take on its mature role as a fast and reliable relay synapse. Copyright © 2014 the authors.

  17. Impact of peripheral hearing loss on top-down auditory processing.

    PubMed

    Lesicko, Alexandria M H; Llano, Daniel A

    2017-01-01

    The auditory system consists of an intricate set of connections interposed between hierarchically arranged nuclei. The ascending pathways carrying sound information from the cochlea to the auditory cortex are, predictably, altered in instances of hearing loss resulting from blockage or damage to peripheral auditory structures. However, hearing loss-induced changes in descending connections that emanate from higher auditory centers and project back toward the periphery are still poorly understood. These pathways, which are the hypothesized substrate of high-level contextual and plasticity cues, are intimately linked to the ascending stream, and are thereby also likely to be influenced by auditory deprivation. In the current report, we review both the human and animal literature regarding changes in top-down modulation after peripheral hearing loss. Both aged humans and cochlear implant users are able to harness the power of top-down cues to disambiguate corrupted sounds and, in the case of aged listeners, may rely more heavily on these cues than non-aged listeners. The animal literature also reveals a plethora of structural and functional changes occurring in multiple descending projection systems after peripheral deafferentation. These data suggest that peripheral deafferentation induces a rebalancing of bottom-up and top-down controls, and that it will be necessary to understand the mechanisms underlying this rebalancing to develop better rehabilitation strategies for individuals with peripheral hearing loss. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  19. Artemis 123: development of a whole-head infant and young child MEG system

    PubMed Central

    Roberts, Timothy P. L.; Paulson, Douglas N.; Hirschkoff, Eugene; Pratt, Kevin; Mascarenas, Anthony; Miller, Paul; Han, Mengali; Caffrey, Jason; Kincade, Chuck; Power, Bill; Murray, Rebecca; Chow, Vivian; Fisk, Charlie; Ku, Matthew; Chudnovskaya, Darina; Dell, John; Golembski, Rachel; Lam, Peter; Blaskey, Lisa; Kuschner, Emily; Bloy, Luke; Gaetz, William; Edgar, J. Christopher

    2014-01-01

    Background: A major motivation in designing the new infant and child magnetoencephalography (MEG) system described in this manuscript is the premise that electrophysiological signatures (resting activity and evoked responses) may serve as biomarkers of neurodevelopmental disorders, with neuronal abnormalities in conditions such as autism spectrum disorder (ASD) potentially detectable early in development. Whole-head MEG systems are generally optimized/sized for adults. Since the magnetic field produced by neuronal currents decreases with the square of distance, and infants and young children have smaller head sizes (and thus increased brain-to-sensor distance), whole-head adult MEG systems do not provide optimal signal-to-noise in younger individuals. This spurred development of a whole-head infant and young child MEG system – Artemis 123. Methods: In addition to describing the design of the Artemis 123, the focus of this manuscript is the use of Artemis 123 to obtain auditory evoked neuromagnetic recordings and resting-state data in young children. Data were collected from a 14-month-old female, an 18-month-old female, and a 48-month-old male. Phantom data are also provided to show localization accuracy. Results: Examination of Artemis 123 auditory data showed generalizability and reproducibility, with auditory responses observed in all participants. The auditory MEG measures were also found to be manipulable, exhibiting sensitivity to tone frequency. Furthermore, there appeared to be a predictable sensitivity of evoked components to development, with latencies decreasing with age. Examination of resting-state data showed characteristic oscillatory activity. Finally, phantom data showed that dipole sources could be localized with an error of less than 0.5 cm. Conclusions: Artemis 123 allows efficient recording of high-quality whole-head MEG in infants four years and younger. 
Future work will involve examining the feasibility of obtaining somatosensory and visual recordings in similar-age children as well as obtaining recordings from younger infants. Thus, the Artemis 123 offers the promise of detecting earlier diagnostic signatures in such neurodevelopmental disorders. PMID:24624069
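
    The square-of-distance falloff noted above is the core design motivation: halving the brain-to-sensor distance quadruples the measured field. A back-of-the-envelope sketch (the distances here are hypothetical, not Artemis 123 specifications):

    ```python
    def relative_field(d_ref, d):
        """Field at sensor distance d, relative to reference distance d_ref,
        assuming the ~1/distance**2 falloff of neuromagnetic fields described above."""
        return (d_ref / d) ** 2

    # Hypothetical brain-to-sensor distances in cm: an infant head in an
    # adult-sized helmet vs. the same head in a child-sized helmet.
    adult_helmet_cm = 4.0
    child_helmet_cm = 2.0
    gain = relative_field(adult_helmet_cm, child_helmet_cm)
    print(gain)  # halving the distance quadruples the field: 4.0
    ```

    This is why a sensor array sized to small heads improves signal-to-noise even before any change in sensor technology.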

  20. Usage of drip drops as stimuli in an auditory P300 BCI paradigm.

    PubMed

    Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu

    2018-02-01

    Recently, many auditory BCIs have used beeps as auditory stimuli, but beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The drip drop is a natural sound that makes humans feel relaxed and comfortable. In this work, three kinds of drip drops were used as stimuli in an auditory-based BCI system to improve its user-friendliness, and the study explored whether drip drops could serve as stimuli in an auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, known as the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and ratings of likability and difficulty, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (both p < 0.05, Wilcoxon signed-rank test). DP also obtained significantly higher likability ratings (p < 0.05, Wilcoxon signed-rank test), with no significant difference in rated difficulty. The results showed that drip drops are reliable acoustic materials for use as stimuli in an auditory BCI system.
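
    The paired comparisons reported above use the Wilcoxon signed-rank test, the standard non-parametric test for two conditions measured on the same subjects. A minimal sketch using `scipy.stats.wilcoxon`; the per-subject accuracies are made up for illustration, since the abstract does not report raw values:

    ```python
    from scipy.stats import wilcoxon  # paired non-parametric test

    # Hypothetical per-subject online accuracies (%) under each paradigm.
    dp = [85, 90, 78, 88, 92, 81, 87, 90, 84, 89]  # drip-drop paradigm
    bp = [75, 82, 70, 80, 85, 76, 79, 83, 77, 81]  # beep paradigm

    stat, p = wilcoxon(dp, bp)
    print(f"W = {stat}, p = {p:.4f}")  # small W with p < 0.05 favors DP here
    ```

    Because the same subjects experience both paradigms, the signed-rank test on the paired differences is more appropriate than an unpaired comparison such as the Mann-Whitney U test.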

  1. Auditory priming improves neural synchronization in auditory-motor entrainment.

    PubMed

    Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J

    2018-05-22

    Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power in a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency that facilitates the motor system during the process of entrainment. 
These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex, specifically the ventral auditory pathway, is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  3. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system

    PubMed Central

    Schrode, Katrina M.; Bee, Mark A.

    2015-01-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male–male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467
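
    A modulation rate transfer function of the kind described above is built by presenting amplitude-modulated stimuli at a range of modulation rates and measuring the steady-state response amplitude at each rate. A minimal sketch on synthetic data; the "responses" here are fabricated sinusoids plus noise rather than real evoked potentials, and all parameters are illustrative:

    ```python
    import numpy as np

    def assr_amplitude(signal, fs, fm):
        """Spectral amplitude of a steady-state response at the stimulus
        modulation frequency fm (Hz), estimated from the FFT."""
        amps = 2 * np.abs(np.fft.rfft(signal)) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), 1 / fs)
        return amps[np.argmin(np.abs(freqs - fm))]

    fs, dur = 1000, 2.0  # sampling rate (Hz) and record length (s)
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(0)

    mrtf = {}
    for fm in (10, 20, 40, 80):  # modulation rates, Hz
        # Synthetic "response": an fm-following component whose amplitude
        # falls off with modulation rate, plus a little recording noise.
        resp = (1.0 / fm) * np.sin(2 * np.pi * fm * t)
        resp += 0.001 * rng.standard_normal(len(t))
        mrtf[fm] = assr_amplitude(resp, fs, fm)
    # mrtf maps each modulation rate to the response amplitude at that rate;
    # plotting amplitude against rate gives the transfer function.
    ```

    Comparing such curves between species, as the study does, shows where each auditory system follows modulation best.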

  4. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  5. Divergent evolutionary rates in vertebrate and mammalian specific conserved non-coding elements (CNEs) in echolocating mammals.

    PubMed

    Davies, Kalina T J; Tsagkogeorga, Georgia; Rossiter, Stephen J

    2014-12-19

    The majority of DNA contained within vertebrate genomes is non-coding, with a certain proportion of this thought to play regulatory roles during development. Conserved Non-coding Elements (CNEs) are an abundant group of putative regulatory sequences that are highly conserved across divergent groups and thus assumed to be under strong selective constraint. Many CNEs may contain regulatory factor binding sites, and their frequent spatial association with key developmental genes - such as those regulating sensory system development - suggests crucial roles in regulating gene expression and cellular patterning. Yet surprisingly little is known about the molecular evolution of CNEs across diverse mammalian taxa or their role in specific phenotypic adaptations. We examined 3,110 vertebrate-specific and ~82,000 mammalian-specific CNEs across 19 and 9 mammalian orders respectively, and tested for changes in the rate of evolution of CNEs located in the proximity of genes underlying the development or functioning of auditory systems. As we focused on CNEs putatively associated with genes underlying the development/functioning of auditory systems, we incorporated echolocating taxa in our dataset because of their highly specialised and derived auditory systems. Phylogenetic reconstructions of concatenated CNEs broadly recovered accepted mammal relationships despite high levels of sequence conservation. We found that CNE substitution rates were highest in rodents and lowest in primates, consistent with previous findings. Comparisons of CNE substitution rates from several genomic regions containing genes linked to auditory system development and hearing revealed differences between echolocating and non-echolocating taxa. Wider taxonomic sampling of four CNEs associated with the homeobox genes Hmx2 and Hmx3 - which are required for inner ear development - revealed family-wise variation across diverse bat species. 
Specifically, high levels of sequence divergence were found within one family of echolocating bats that utilise frequency-modulated echolocation calls varying widely in frequency and intensity. Levels of selective constraint acting on CNEs differed both across genomic locations and taxa, with observed variation in substitution rates of CNEs among bat species. More work is needed to determine whether this variation can be linked to echolocation, and wider taxonomic sampling is necessary to fully document levels of conservation in CNEs across diverse taxa.

  6. Noise over-exposure alters long-term somatosensory-auditory processing in the dorsal cochlear nucleus – possible basis for tinnitus-related hyperactivity?

    PubMed Central

    Dehmel, Susanne; Pradhan, Shashwati; Koehler, Seth; Bledsoe, Sanford; Shore, Susan

    2012-01-01

    The dorsal cochlear nucleus (DCN) is the first neural site of bimodal auditory-somatosensory integration. Previous studies have shown that stimulation of somatosensory pathways results in immediate suppression or enhancement of subsequent acoustically-evoked discharges. In the unimpaired auditory system, suppression predominates. However, damage to the auditory input pathway leads to enhancement of excitatory somatosensory inputs to the cochlear nucleus, changing their effects on DCN neurons (Zeng et al., 2009; Shore et al., 2008). Given the well-described connection between the somatosensory system and tinnitus in patients, we sought to determine whether plastic changes in long-lasting bimodal somatosensory-auditory processing accompany tinnitus. Here we demonstrate for the first time in vivo long-term effects of somatosensory inputs on acoustically-evoked discharges of DCN neurons in guinea pigs. The effects of trigeminal nucleus stimulation are compared between normal-hearing animals and animals overexposed to narrow-band noise and behaviorally tested for tinnitus. The noise exposure resulted in a temporary threshold shift (TTS) in auditory brainstem responses but a persistent increase in spontaneous and sound-evoked DCN unit firing rates and increased steepness of rate-level functions (RLFs). Rate increases were especially prominent in buildup units. The long-term somatosensory enhancement of sound-evoked responses was strengthened while suppressive effects diminished in noise-exposed animals, especially those that developed tinnitus. Damage to auditory nerve fibers (ANF) is postulated to trigger compensatory long-term synaptic plasticity of somatosensory inputs that might be an important underlying mechanism for tinnitus generation. PMID:22302808

  7. Brainstem origins for cortical 'what' and 'where' pathways in the auditory system.

    PubMed

    Kraus, Nina; Nicol, Trent

    2005-04-01

    We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.

  8. Hearing in Insects.

    PubMed

    Göpfert, Martin C; Hennig, R Matthias

    2016-01-01

    Insect hearing has independently evolved multiple times in the context of intraspecific communication and predator detection by transforming proprioceptive organs into ears. Research over the past decade, ranging from the biophysics of sound reception to molecular aspects of auditory transduction to the neuronal mechanisms of auditory signal processing, has greatly advanced our understanding of how insects hear. Apart from evolutionary innovations that seem unique to insect hearing, parallels between insect and vertebrate auditory systems have been uncovered, and the auditory sensory cells of insects and vertebrates turned out to be evolutionarily related. This review summarizes our current understanding of insect hearing. It also discusses recent advances in insect auditory research, which have put forward insect auditory systems for studying biological aspects that extend beyond hearing, such as cilium function, neuronal signal computation, and sensory system evolution.

  9. Weak Responses to Auditory Feedback Perturbation during Articulation in Persons Who Stutter: Evidence for Abnormal Auditory-Motor Transformation

    PubMed Central

    Cai, Shanqing; Beal, Deryk S.; Ghosh, Satrajit S.; Tiede, Mark K.; Guenther, Frank H.; Perkell, Joseph S.

    2012-01-01

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants’ compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls’ and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands. PMID:22911857

  10. A robotic voice simulator and the interactive training for hearing-impaired people.

    PubMed

    Sawada, Hideyuki; Kitani, Mitsuki; Hayashi, Yasumori

    2008-01-01

    A talking and singing robot that adaptively learns vocalization skills by means of an auditory feedback learning algorithm is being developed. The robot consists of motor-controlled vocal organs, such as vocal cords, a vocal tract and a nasal cavity, to generate a natural voice imitating human vocalization. In this study, the robot is applied to a speech articulation training system for the hearing-impaired, because the robot is able to reproduce their vocalization and to show them how it can be improved to produce clearer speech. The paper briefly introduces the mechanical construction of the robot and how it autonomously acquires vocalization skills through auditory feedback learning by listening to human speech. The training system is then described, together with an evaluation of the speech training by hearing-impaired people.

  11. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    PubMed

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. © 2015. Published by The Company of Biologists Ltd.

  12. Hearing and the round goby: Understanding the auditory system of the round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Belanger, Andrea J.; Higgs, Dennis M.

    2005-04-01

    The round goby (Neogobius melanostomus) is an invasive species in the Great Lakes watershed. Adult round gobies show behavioral responses to conspecific vocalizations, but physiological investigations have not yet been conducted to quantify their hearing abilities. We have been examining the physiological and morphological development of the auditory system in the round goby. Various frequencies (100 Hz to 800 Hz and conspecific sounds), at various intensities (120 dB to 170 dB re 1 μPa), were presented to juveniles and adults and their auditory brainstem responses (ABR) were recorded. Round gobies only respond physiologically to tones from 100-600 Hz, with thresholds varying between 145 and 155 dB re 1 μPa. The response threshold to conspecific sounds was 140 dB re 1 μPa. There was no significant difference in auditory threshold between sizes of fish for either tones or conspecific sounds. Saccular epithelia were stained using phalloidin, and there was a trend towards an increase in both hair cell number and density with increasing fish size. These results represent a first attempt to quantify auditory abilities in this invasive species. This is an important step in understanding their reproductive physiology, which could potentially aid in their population control. [Funded by NSERC.]
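
    The thresholds in this record are underwater sound pressure levels in dB re 1 μPa, i.e. 20·log10 of the pressure relative to a 1 μPa reference. A minimal sketch of the conversion (the reference value is the standard underwater-acoustics convention, not a detail taken from the record itself):

```python
import math

REF_UPA = 1e-6  # underwater reference pressure: 1 micropascal, in Pa

def spl_db_re_1upa(pressure_pa):
    """Sound pressure level in dB re 1 uPa for an RMS pressure in pascals."""
    return 20.0 * math.log10(pressure_pa / REF_UPA)

def pressure_from_spl(db_re_1upa):
    """Inverse: RMS pressure in pascals from a level in dB re 1 uPa."""
    return REF_UPA * 10.0 ** (db_re_1upa / 20.0)

# A 150 dB re 1 uPa threshold corresponds to about 31.6 Pa RMS
print(round(pressure_from_spl(150.0), 1))  # -> 31.6
```

    Note that the underwater reference (1 μPa) differs from the in-air convention (20 μPa), so underwater levels are not directly comparable to familiar in-air dB SPL figures.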

  13. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  14. Maladaptive plasticity in tinnitus-triggers, mechanisms and treatment

    PubMed Central

    Shore, Susan E; Roberts, Larry E.; Langguth, Berthold

    2016-01-01

    Tinnitus is a phantom auditory sensation that reduces quality of life for millions worldwide and for which there is no medical cure. Most cases are associated with hearing loss caused by the aging process or noise exposure. Because exposure to loud recreational sound is common among youthful populations, young persons are at increasing risk. Head or neck injuries can also trigger the development of tinnitus, as altered somatosensory input can affect auditory pathways and lead to tinnitus or modulate its intensity. Emotional and attentional state may play a role in tinnitus development and maintenance via top-down mechanisms. Thus, military personnel in combat are particularly at risk due to combined hearing loss, somatosensory system disturbances and emotional stress. Neuroscience research has identified neural changes related to tinnitus that commence at the cochlear nucleus and extend to the auditory cortex and brain regions beyond. Maladaptive neural plasticity appears to underlie these neural changes, as it results in increased spontaneous firing rates and synchrony among neurons in central auditory structures that may generate the phantom percept. This review highlights the links between animal and human studies, including several therapeutic approaches that have been developed, which aim to target the neuroplastic changes underlying tinnitus. PMID:26868680

  15. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    PubMed

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning, including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, which may have long-term consequences for language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD.

  16. NANOCI-Nanotechnology Based Cochlear Implant With Gapless Interface to Auditory Neurons.

    PubMed

    Senn, Pascal; Roccio, Marta; Hahnewald, Stefan; Frick, Claudia; Kwiatkowska, Monika; Ishikawa, Masaaki; Bako, Peter; Li, Hao; Edin, Fredrik; Liu, Wei; Rask-Andersen, Helge; Pyykkö, Ilmari; Zou, Jing; Mannerström, Marika; Keppner, Herbert; Homsy, Alexandra; Laux, Edith; Llera, Miguel; Lellouche, Jean-Paul; Ostrovsky, Stella; Banin, Ehud; Gedanken, Aharon; Perkas, Nina; Wank, Ute; Wiesmüller, Karl-Heinz; Mistrík, Pavel; Benav, Heval; Garnham, Carolyn; Jolly, Claude; Gander, Filippo; Ulrich, Peter; Müller, Marcus; Löwenheim, Hubert

    2017-09-01

    Cochlear implants (CI) restore functional hearing in the majority of deaf patients. Despite the tremendous success of these devices, some limitations remain. The bottleneck for optimal electrical stimulation with CI is caused by the anatomical gap between the electrode array and the auditory neurons in the inner ear. As a consequence, current devices are limited by 1) low frequency resolution, hence sub-optimal sound quality, and 2) large stimulation currents, hence high energy consumption (responsible for significant battery costs and for impeding the development of fully implantable systems). A recently completed, multinational and interdisciplinary project called NANOCI aimed at overcoming current limitations by creating a gapless interface between auditory nerve fibers and the cochlear implant electrode array. This ambitious goal was achieved in vivo by neurotrophin-induced attraction of neurites through an intracochlear gel-nanomatrix onto a modified nanoCI electrode array located in the scala tympani of deafened guinea pigs. Functionally, the gapless interface led to lower stimulation thresholds and a larger dynamic range in vivo, and to reduced stimulation energy requirements (up to fivefold) in an in vitro model using auditory neurons cultured on multi-electrode arrays. In conclusion, the NANOCI project yielded proof of concept that a gapless interface between auditory neurons and cochlear implant electrode arrays is feasible. These findings may be of relevance for the development of future CI systems with better sound quality and performance and lower energy consumption. The present overview/review paper summarizes the NANOCI project history and highlights the achievements of the individual work packages.

  17. A Brain System for Auditory Working Memory.

    PubMed

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances over previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of the hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  18. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    PubMed Central

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  19. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that sound is energy propagating through space, and illustrates psychoacoustics: how listeners map the physical aspects of sound and vibration onto perception. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and how sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student-accessible platforms, including web pages, stand-alone presentations, and hardware-based systems for museum displays.

  20. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    PubMed

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.

  1. The role of auditory and kinaesthetic feedback mechanisms on phonatory stability in children.

    PubMed

    Rathna Kumar, S B; Azeem, Suhail; Choudhary, Abhishek Kumar; Prakash, S G R

    2013-12-01

    Auditory feedback plays an important role in phonatory control. When auditory feedback is disrupted, various changes are observed in vocal motor control. Vocal intensity and fundamental frequency (F0) levels tend to increase in response to auditory masking. Because of the close reflexive links between the auditory and phonatory systems, it is likely that phonatory stability may be disrupted when auditory feedback is disrupted or altered. However, studies on phonatory stability under auditory masking conditions in adult subjects showed that most subjects maintained normal levels of phonatory stability. The authors of these earlier investigations suggested that auditory feedback is not the sole contributor to vocal motor control and phonatory stability; a complex neuromuscular reflex system known as kinaesthetic feedback may play a role in controlling phonatory stability when auditory feedback is disrupted or lacking. This motivates further investigation of whether children show similar patterns of phonatory stability under auditory masking, since their neuromotor systems are still developing, less mature, and less resistant to altered auditory feedback than those of adults. A total of 40 children with normal hearing and speech (20 male and 20 female) between 6 and 8 years of age participated as subjects. The acoustic parameters shimmer, jitter and harmonics-to-noise ratio (HNR) were measured and compared between a no-masking condition (0 dB ML) and a masking condition (90 dB ML). Despite their neuromotor systems being less mature and less resistant than those of adults to altered auditory feedback, most of the children in the study demonstrated increased phonatory stability, reflected in reduced shimmer and jitter and increased HNR values.
This study suggests that most children demonstrate well-established patterns of kinaesthetic feedback, which may have allowed them to maintain normal levels of vocal motor control even in the presence of disturbed auditory feedback. Hence, it can be concluded that children also use a kinaesthetic feedback mechanism to control phonatory stability when auditory feedback is disrupted, which in turn highlights the importance of including kinaesthetic feedback in therapeutic/intervention approaches for children with hearing and neurogenic speech deficits.
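
    The perturbation measures compared in this record (shimmer, jitter, HNR) are standard voice-acoustics quantities. As a rough sketch of what the first two capture, the functions below implement local jitter and shimmer in one common form: the mean absolute cycle-to-cycle difference divided by the mean. The cycle data shown are hypothetical, and HNR estimation from a real recording is more involved, so it is omitted here.

```python
def local_jitter(periods_s):
    """Local jitter (%): mean absolute difference between consecutive
    glottal cycle periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods_s, periods_s[1:])]
    mean_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods_s) / len(periods_s)
    return 100.0 * mean_diff / mean_period

def local_shimmer(amplitudes):
    """Local shimmer (%): the same formula applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Hypothetical cycle data: a perfectly steady voice has 0% jitter
steady = [0.005] * 10  # 10 cycles of exactly 5 ms (a 200 Hz voice)
print(local_jitter(steady))  # -> 0.0
```

    Lower jitter and shimmer (and higher HNR) correspond to the increased phonatory stability reported above.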

  2. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  3. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  4. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    PubMed

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential mismatch response (MMR) is an indicator of cortical auditory discrimination abilities and has been found to be reduced in individuals with reading and writing impairments, as well as in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences relevant for later writing abilities start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points in development: at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, emerging at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems that develop during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in the guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons showed sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that the somatosensory and auditory modalities have parallel subcortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  6. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  7. Listening to Another Sense: Somatosensory Integration in the Auditory System

    PubMed Central

    Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.

    2014-01-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698

  8. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat.

    PubMed

    Razak, Khaleel A; Fuzessery, Zoltan M

    2015-10-01

    Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independently of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience-independent manner. (3) Experience utilizes millisecond-range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. © 2014 Wiley Periodicals, Inc.

  9. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat

    PubMed Central

    Razak, Khaleel A.; Fuzessery, Zoltan M.

    2014-01-01

    Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independently of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience-independent manner. (3) Experience utilizes millisecond-range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. PMID:25142131

  10. Music and language: relations and disconnections.

    PubMed

    Kraus, Nina; Slater, Jessica

    2015-01-01

    Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.

  11. The effects of early auditory-based intervention on adult bilateral cochlear implant outcomes.

    PubMed

    Lim, Stacey R

    2017-09-01

    The goal of this exploratory study was to determine the types of improvement that sequentially implanted auditory-verbal and auditory-oral adults with prelingual and childhood hearing loss obtained in bilateral listening conditions compared with their best unilateral listening condition. Five auditory-verbal adults and five auditory-oral adults were recruited for this study. Participants were seated in the center of a 6-loudspeaker array. BKB-SIN sentences were presented from 0° azimuth, while multi-talker babble was presented from various loudspeakers. BKB-SIN scores in the bilateral and the best unilateral listening conditions were compared to determine the amount of improvement gained. As a group, the participants had improved speech understanding scores in the bilateral listening condition. Although the difference was not statistically significant, the auditory-verbal group tended to show greater speech understanding at higher levels of competing background noise than the auditory-oral participants. Bilateral cochlear implantation provides individuals with prelingual and childhood hearing loss with improved speech understanding in noise. A greater emphasis on auditory development during the critical language development years may contribute to increased speech understanding in adulthood. However, other demographic factors such as age or device characteristics must also be considered. Although both auditory-verbal and auditory-oral approaches emphasize spoken language development, they emphasize auditory development to different degrees, which may affect cochlear implant (CI) outcomes. Further consideration should be made in future auditory research to determine whether these differences contribute to performance outcomes.
Additional investigation with a larger participant pool, controlled for effects of age and CI devices and processing strategies, would be necessary to determine whether language learning approaches are associated with different levels of speech understanding performance.

  12. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed on a perceptual item: simple detection, discrimination, and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds.
Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" that deduces the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.

  13. Hearing loss and the central auditory system: Implications for hearing aids

    NASA Astrophysics Data System (ADS)

    Frisina, Robert D.

    2003-04-01

    Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]

  14. Probing sensorimotor integration during musical performance.

    PubMed

    Furuya, Shinichi; Furukawa, Yuta; Uehara, Kazumasa; Oku, Takanori

    2018-03-10

    The integration of afferent sensory information from the visual, auditory, and proprioceptive systems into the execution and updating of motor programs plays a crucial role in the control and acquisition of the skillful sequential movements of musical performance. However, the conventional behavioral and neurophysiological techniques that have been applied to simple motor behaviors are limited in their ability to elucidate the online sensorimotor integration processes underlying skillful musical performance. Here, we propose two novel techniques developed to investigate the roles of auditory and proprioceptive feedback in piano performance. First, a closed-loop noninvasive brain stimulation system consisting of transcranial magnetic stimulation, a motion sensor, and a microcomputer enabled assessment of the time-varying cortical processes subserving auditory-motor integration during piano playing. Second, a force-field system capable of manipulating the weight of a piano key allowed movement adaptation based on the feedback obtained to be characterized, which can shed light on the formation of an internal representation of the piano. Results of neurophysiological and psychophysics experiments provided evidence validating these systems as effective means of disentangling the computational and neural processes of sensorimotor integration in musical performance. © 2018 New York Academy of Sciences.

  15. Auditory processing deficits in individuals with primary open-angle glaucoma.

    PubMed

    Rance, Gary; O'Hare, Fleur; O'Leary, Stephen; Starr, Arnold; Ly, Anna; Cheng, Belinda; Tomlin, Dani; Graydon, Kelley; Chisari, Donella; Trounce, Ian; Crowston, Jonathan

    2012-01-01

    The high energy demand of the auditory and visual pathways renders these sensory systems prone to diseases that impair mitochondrial function. Primary open-angle glaucoma, a neurodegenerative disease of the optic nerve, has recently been associated with a spectrum of mitochondrial abnormalities. This study sought to investigate auditory processing in individuals with open-angle glaucoma. Twenty-seven subjects with open-angle glaucoma underwent electrophysiologic (auditory brainstem response), auditory temporal processing (amplitude modulation detection), and speech perception (monosyllabic words in quiet and in background noise) assessment in each ear. A cohort of age-, gender- and hearing-level-matched control subjects was also tested. While the majority of glaucoma subjects in this study demonstrated normal auditory function, a significant number (6/27 subjects, 22%) showed abnormal auditory brainstem responses and impaired auditory perception in one or both ears. The finding that a significant proportion of subjects with open-angle glaucoma presented with auditory dysfunction provides evidence of systemic neuronal susceptibility. Affected individuals may suffer significant communication difficulties in everyday listening situations.

  16. Establishing the Response of Low Frequency Auditory Filters

    NASA Technical Reports Server (NTRS)

    Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James

    2017-01-01

    The response of auditory filters is central to the frequency selectivity of the human auditory system. This is especially true for the realistic complex sounds encountered in many applications, such as modeling the audibility of sound, voice recognition, noise cancelation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low-frequency (below 100 Hz) auditory filters. Two experiments were designed and executed; the first measured subjects' hearing thresholds for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second measured psychophysical tuning curves (PTCs) at two signal frequencies (Fs = 40 and 63 Hz). Experiment 1 involved 36 subjects, while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase procedure using either a variable-level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in the form of PTCs, the response of the auditory filters, their distribution, and a comparison with similar recently published data.
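
    The 3-down 1-up rule used in both experiments lowers the stimulus level after three consecutive correct responses and raises it after any incorrect response, so the track converges on the level yielding roughly 79.4% correct. A minimal sketch of the tracking logic, with a hypothetical deterministic observer standing in for a listener (not the study's actual implementation):

```python
def staircase_3down_1up(respond, start, step, n_trials):
    """Track a threshold with a 3-down 1-up adaptive rule.

    respond(level) -> True for a correct response at that level.
    Returns the final level and the list of reversal levels
    (levels at which the track changed direction).
    """
    level = start
    streak = 0        # consecutive correct responses
    direction = 0     # -1 descending, +1 ascending, 0 before first move
    reversals = []
    for _ in range(n_trials):
        if respond(level):
            streak += 1
            if streak == 3:
                streak = 0
                if direction == +1:      # turning point: up -> down
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            streak = 0
            if direction == -1:          # turning point: down -> up
                reversals.append(level)
            direction = +1
            level += step
    return level, reversals
```

    With an observer who is correct whenever the level is at or above 20 dB, a track started at 40 dB descends and then oscillates around 20 dB; in practice the threshold is estimated by averaging the last several reversal levels.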

  17. A corollary discharge maintains auditory sensitivity during sound production

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2002-08-01

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  18. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  19. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    PubMed

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects.
Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice.
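
    The abstract does not specify how the notching was realized; as an illustration of the underlying signal-processing idea, a standard second-order notch filter (coefficients from the widely used RBJ Audio-EQ cookbook, with a hypothetical center frequency standing in for an individual's tinnitus frequency) could be sketched as:

```python
import math

def notch_filter(samples, fs, f0, q=5.0):
    """Second-order (biquad) notch: strongly attenuates a narrow band
    centered on f0 Hz while passing other frequencies.
    Coefficients follow the RBJ Audio-EQ cookbook; q controls the
    notch width (bandwidth is roughly f0 / q)."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    b0, b1, b2 = 1.0, -2.0 * cw, 1.0
    a0, a1, a2 = 1.0 + alpha, -2.0 * cw, 1.0 - alpha
    # Direct Form I difference equation
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2 = x, x1
        y1, y2 = y, y1
        out.append(y)
    return out
```

    A tone at the center frequency is almost completely removed once the filter's transient dies out, while tones outside the notch pass essentially unchanged; TMNMT applies the same principle to the energy spectrum of music around the tinnitus frequency.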

  20. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    PubMed Central

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus—tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. 
Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice. PMID:22754508

  1. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    PubMed

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Mandarin Chinese vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may therefore indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
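
    The "variance accounted for" figures quoted above (36% and 6%) are R² values from regression. As a reminder of the computation, a minimal ordinary-least-squares sketch on hypothetical data (not the study's):

```python
def ols_r2(xs, ys):
    """Fit y = a*x + b by ordinary least squares and return (a, b, r2),
    where r2 is the fraction of variance in y accounted for by x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

    In the multi-predictor case reported in the study, the unique variance of a predictor is typically assessed as the drop in R² when that predictor is removed from the full model.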

  2. Adenosine and the Auditory System

    PubMed Central

    Vlajkovic, Srdjan M; Housley, Gary D; Thorne, Peter R

    2009-01-01

    Adenosine is a signalling molecule that modulates cellular activity in the central nervous system and peripheral organs via four G protein-coupled receptors designated A1, A2A, A2B, and A3. This review surveys the literature on the role of adenosine in auditory function, particularly cochlear function and its protection from oxidative stress. The specific tissue distribution of adenosine receptors in the mammalian cochlea implicates adenosine signalling in sensory transduction and auditory neurotransmission, although functional studies have demonstrated that adenosine stimulates cochlear blood flow but does not alter the resting and sound-evoked auditory potentials. An interest in a potential otoprotective role for adenosine has recently evolved, fuelled by the capacity of A1 adenosine receptors to prevent cochlear injury caused by acoustic trauma and ototoxic drugs. The balance between A1 and A2A receptors is conceived as critical for the cochlear response to oxidative stress, which is an underlying mechanism of the most common inner ear pathologies (e.g. noise-induced and age-related hearing loss, drug ototoxicity). Enzymes involved in adenosine metabolism, adenosine kinase and adenosine deaminase, are also emerging as attractive targets for controlling oxidative stress in the cochlea. Other possible targets include ectonucleotidases that generate adenosine from extracellular ATP, and nucleoside transporters, which regulate adenosine concentrations on both sides of the plasma membrane. The development of selective adenosine receptor agonists and antagonists that can cross the blood-cochlea barrier is bolstering efforts to develop therapeutic interventions aimed at ameliorating cochlear injury. Manipulations of the adenosine signalling system thus hold significant promise in the therapeutic management of oxidative stress in the cochlea. PMID:20190966

  3. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally in any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that holds between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
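    The capture/recreation pipeline summarized above can be illustrated with a minimal numerical sketch. This is our own toy example, not the dissertation's implementation: it uses hand-coded first-order real spherical harmonics, a hypothetical 16-microphone layout, and a least-squares fit to show how pressures sampled on a sphere are decomposed into spherical-harmonic coefficients and then re-synthesized (the reciprocity idea in miniature).

    ```python
    import numpy as np

    def real_sh_order1(x, y, z):
        """First-order real spherical harmonics Y_0^0, Y_1^-1, Y_1^0, Y_1^1
        evaluated at unit direction vectors (one row per microphone)."""
        return np.column_stack([
            np.full_like(x, 0.5 * np.sqrt(1.0 / np.pi)),  # Y_0^0
            np.sqrt(3.0 / (4.0 * np.pi)) * y,             # Y_1^-1
            np.sqrt(3.0 / (4.0 * np.pi)) * z,             # Y_1^0
            np.sqrt(3.0 / (4.0 * np.pi)) * x,             # Y_1^1
        ])

    def capture(pressure, dirs):
        """Least-squares spherical-harmonic coefficients of the sampled field."""
        Y = real_sh_order1(dirs[:, 0], dirs[:, 1], dirs[:, 2])
        coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
        return coeffs

    def recreate(coeffs, dirs):
        """Re-synthesize the field from the captured coefficients
        (the loudspeaker-array counterpart of the capture step)."""
        return real_sh_order1(dirs[:, 0], dirs[:, 1], dirs[:, 2]) @ coeffs

    # Hypothetical quasi-uniform 16-microphone spherical layout.
    rng = np.random.default_rng(1)
    v = rng.normal(size=(16, 3))
    dirs = v / np.linalg.norm(v, axis=1, keepdims=True)

    # Synthetic field: twice the Y_1^0 ("vertical dipole") mode.
    p = 2.0 * real_sh_order1(dirs[:, 0], dirs[:, 1], dirs[:, 2])[:, 2]
    c = capture(p, dirs)        # c[2] recovers the mode amplitude
    p_hat = recreate(c, dirs)   # matches p up to first order
    ```

    A real system works at much higher spherical-harmonic order and includes a frequency-dependent radial term for a rigid sphere; the least-squares step above is only the flexible-layout idea the abstract mentions.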

  4. Toward a dual-learning systems model of speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Koslov, Seth R.; Maddox, W. T.

    2014-01-01

    More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions. PMID:25132827

  5. Neurophysiologic measures of auditory function in fish consumers: associations with long chain polyunsaturated fatty acids and methylmercury.

    PubMed

    Dziorny, Adam C; Orlando, Mark S; Strain, J J; Davidson, Philip W; Myers, Gary J

    2013-09-01

    Determining if associations exist between child neurodevelopment and environmental exposures, especially low level or background ones, is challenging and dependent upon being able to measure specific and sensitive endpoints. Psychometric or behavioral measures of CNS function have traditionally been used in such studies, but do have some limitations. Auditory neurophysiologic measures examine different nervous system structures and mechanisms, have fewer limitations, can more easily be quantified, and might be helpful additions to testing. To date, their use in human epidemiological studies has been limited. We reviewed the use of auditory brainstem responses (ABR) and otoacoustic emissions (OAE) in studies designed to determine the relationship of exposures to methyl mercury (MeHg) and nutrients from fish consumption with neurological development. We included studies of experimental animals and humans in an effort to better understand the possible benefits and risks of fish consumption. We reviewed the literature on the use of ABR and OAE to measure associations with environmental exposures that result from consuming a diet high in fish. We focused specifically on long chain polyunsaturated fatty acids (LCPUFA) and MeHg. We performed a comprehensive review of relevant studies using web-based search tools and appropriate search terms. Gestational exposure to both LCPUFA and MeHg has been reported to influence the developing auditory system. In experimental studies supplemental LCPUFA is reported to prolong ABR latencies and human studies also suggest an association. Experimental studies of acute and gestational MeHg exposure are reported to prolong ABR latencies and impair hair cell function. In humans, MeHg exposure is reported to prolong ABR latencies, but the impact on hair cell function is unknown. 
The auditory system can provide objective measures and may be useful in studying exposures to nutrients and toxicants and whether they are associated with children's neurodevelopment. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications.

    PubMed

    Glick, Hannah; Sharma, Anu

    2017-01-01

    This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon in which deprivation in one sensory modality (e.g. the auditory modality as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g. visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral, congenital deafness, single-sided deafness, adults with early-stage, mild-moderate hearing loss, and individual adult and pediatric patients exhibiting excellent and average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems, and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention. 
Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing impaired populations across the lifespan. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Auditory perception modulated by word reading.

    PubMed

    Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja

    2016-10-01

    Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that, in participants with high lexical decision performance, sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.

  8. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders

    PubMed Central

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Learning outcomes Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD. PMID:19419735

  9. LAMP: 100+ Systematic Exercise Lessons for Developing Linguistic Auditory Memory Patterns in Beginning Readers.

    ERIC Educational Resources Information Center

    Valett, Robert E.

    Research findings on auditory sequencing and auditory blending and fusion, auditory-visual integration, and language patterns are presented in support of the Linguistic Auditory Memory Patterns (LAMP) program. LAMP consists of 100 developmental lessons for young students with learning disabilities or language problems. The lessons are included in…

  10. Maturation of Visual and Auditory Temporal Processing in School-Aged Children

    ERIC Educational Resources Information Center

    Dawes, Piers; Bishop, Dorothy V. M.

    2008-01-01

    Purpose: To examine development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years of age. Auditory processes…

  11. Macrophage-Mediated Glial Cell Elimination in the Postnatal Mouse Cochlea

    PubMed Central

    Brown, LaShardai N.; Xing, Yazhi; Noble, Kenyaria V.; Barth, Jeremy L.; Panganiban, Clarisse H.; Smythe, Nancy M.; Bridges, Mary C.; Zhu, Juhong; Lang, Hainan

    2017-01-01

    Hearing relies on the transmission of auditory information from sensory hair cells (HCs) to the brain through the auditory nerve. This relay of information requires HCs to be innervated by spiral ganglion neurons (SGNs) in an exclusive manner and SGNs to be ensheathed by myelinating and non-myelinating glial cells. In the developing auditory nerve, mistargeted SGN axons are retracted or pruned and excessive cells are cleared in a process referred to as nerve refinement. Whether auditory glial cells are eliminated during auditory nerve refinement is unknown. Using early postnatal mice of either sex, we show that glial cell numbers decrease after the first postnatal week, corresponding temporally with nerve refinement in the developing auditory nerve. Additionally, expression of immune-related genes was upregulated and macrophage numbers increased in a manner coinciding with the reduction of glial cell numbers. Transient depletion of macrophages during early auditory nerve development, using transgenic CD11bDTR/EGFP mice, resulted in the appearance of excessive glial cells. Macrophage depletion caused abnormalities in myelin formation and transient edema of the stria vascularis. Macrophage-depleted mice also showed auditory function impairment that partially recovered in adulthood. These findings demonstrate that macrophages contribute to the regulation of glial cell number during postnatal development of the cochlea and that glial cells play a critical role in hearing onset and auditory nerve maturation. PMID:29375297

  12. Sensorimotor nucleus NIf is necessary for auditory processing but not vocal motor output in the avian song system.

    PubMed

    Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F

    2005-04-01

    Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.

  13. Association between central auditory processing mechanism and cardiac autonomic regulation

    PubMed Central

    2014-01-01

    Background This study was conducted to describe the association between the central auditory processing mechanism and cardiac autonomic regulation. Methods We searched for papers on the topic addressed in this study in the following databases: Medline, Pubmed, Lilacs, Scopus and Cochrane. The key words were: “auditory stimulation, heart rate, autonomic nervous system and P300”. Results The findings in the literature demonstrated that auditory stimulation influences the autonomic nervous system and has been used in conjunction with other methods. It is considered a promising avenue in the investigation of therapeutic procedures for rehabilitation and quality of life in several pathologies. Conclusion The association between auditory stimulation and cardiac autonomic regulation has received significant research attention, particularly in relation to musical stimuli. PMID:24834128

  14. Sustained Perceptual Deficits from Transient Sensory Deprivation

    PubMed Central

    Sanes, Dan H.

    2015-01-01

    Sensory pathways display heightened plasticity during development, yet the perceptual consequences of early experience are generally assessed in adulthood. This approach does not allow one to identify transient perceptual changes that may be linked to the central plasticity observed in juvenile animals. Here, we determined whether a brief period of bilateral auditory deprivation affects sound perception in developing and adult gerbils. Animals were reared with bilateral earplugs, either from postnatal day 11 (P11) to postnatal day 23 (P23) (a manipulation previously found to disrupt gerbil cortical properties), or from P23-P35. Fifteen days after earplug removal and restoration of normal thresholds, animals were tested on their ability to detect the presence of amplitude modulation (AM), a temporal cue that supports vocal communication. Animals reared with earplugs from P11-P23 displayed elevated AM detection thresholds, compared with age-matched controls. In contrast, an identical period of earplug rearing at a later age (P23-P35) did not impair auditory perception. Although the AM thresholds of earplug-reared juveniles improved during a week of repeated testing, a subset of juveniles continued to display a perceptual deficit. Furthermore, although the perceptual deficits induced by transient earplug rearing had resolved for most animals by adulthood, a subset of adults displayed impaired performance. Control experiments indicated that earplugging did not disrupt the integrity of the auditory periphery. Together, our results suggest that P11-P23 encompasses a critical period during which sensory deprivation disrupts central mechanisms that support auditory perceptual skills. SIGNIFICANCE STATEMENT Sensory systems are particularly malleable during development. This heightened degree of plasticity is beneficial because it enables the acquisition of complex skills, such as music or language. 
However, this plasticity comes with a cost: nervous system development displays an increased vulnerability to the sensory environment. Here, we identify a precise developmental window during which mild hearing loss affects the maturation of an auditory perceptual cue that is known to support animal communication, including human speech. Furthermore, animals reared with transient hearing loss display deficits in perceptual learning. Our results suggest that speech and language delays associated with transient or permanent childhood hearing loss may be accounted for, in part, by deficits in central auditory processing mechanisms. PMID:26224865

  15. Development of a wireless system for auditory neuroscience.

    PubMed

    Lukes, A J; Lear, A T; Snider, R K

    2001-01-01

    In order to study how the auditory cortex extracts communication sounds in a realistic acoustic environment, a wireless system is being developed that will transmit acoustic as well as neural signals. The miniature transmitter will be capable of transmitting two acoustic signals with 37.5 kHz bandwidths (75 kHz sample rate) and 56 neural signals with bandwidths of 9.375 kHz (18.75 kHz sample rate). These signals will be time-division multiplexed into one high bandwidth signal with a 1.2 MHz sample rate. This high bandwidth signal will then be frequency modulated onto a 2.4 GHz carrier, which resides in the industrial, scientific, and medical (ISM) band that is designed for low-power short-range wireless applications. On the receiver side, the signal will be demodulated from the 2.4 GHz carrier and then digitized by an analog-to-digital (A/D) converter. The acoustic and neural signals will be digitally demultiplexed from the multiplexed signal into their respective channels. Oversampling (20 MHz) will allow the reconstruction of the multiplexing clock by a digital signal processor (DSP) that will perform frame and bit synchronization. A frame is a subset of the signal that contains all the channels; several channels tied high and low will signal the start of a frame. This technological development will bring two benefits to auditory neuroscience. It will allow simultaneous recording of many neurons, permitting studies of population codes. It will also allow neural functions to be determined in higher auditory areas by correlating neural and acoustic signals without a priori knowledge of the necessary stimuli.
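    The quoted channel counts and sample rates are mutually consistent; a quick arithmetic check (constant names are ours, and the 64-slot frame is one plausible layout, not one specified in the abstract) reproduces the 1.2 MHz multiplexed rate:

    ```python
    # From the abstract: 2 acoustic channels sampled at 75 kHz and
    # 56 neural channels sampled at 18.75 kHz, time-division multiplexed.
    ACOUSTIC_CH, ACOUSTIC_FS = 2, 75_000   # channels, Hz per channel
    NEURAL_CH, NEURAL_FS = 56, 18_750      # channels, Hz per channel

    # Aggregate sample rate of the multiplexed stream.
    aggregate_fs = ACOUSTIC_CH * ACOUSTIC_FS + NEURAL_CH * NEURAL_FS

    # If each frame carries every neural channel once, each acoustic channel
    # must appear 75 kHz / 18.75 kHz = 4 times, so frames repeat at the
    # neural sample rate.
    slots_per_frame = NEURAL_CH + ACOUSTIC_CH * (ACOUSTIC_FS // NEURAL_FS)
    frame_rate = aggregate_fs // slots_per_frame

    print(aggregate_fs, slots_per_frame, frame_rate)
    # 1200000 64 18750  ->  the 1.2 MHz rate quoted above
    ```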

  16. Calcium-Induced Calcium Release during Action Potential Firing in Developing Inner Hair Cells

    PubMed Central

    Iosub, Radu; Avitabile, Daniele; Grant, Lisa; Tsaneva-Atanasova, Krasimira; Kennedy, Helen J.

    2015-01-01

    In the mature auditory system, inner hair cells (IHCs) convert sound-induced vibrations into electrical signals that are relayed to the central nervous system via auditory afferents. Before the cochlea can respond to normal sound levels, developing IHCs fire calcium-based action potentials that disappear close to the onset of hearing. Action potential firing triggers transmitter release from the immature IHC that in turn generates experience-independent firing in auditory neurons. These early signaling events are thought to be essential for the organization and development of the auditory system and hair cells. A critical component of the action potential is the rise in intracellular calcium that activates both small conductance potassium channels essential during membrane repolarization, and triggers transmitter release from the cell. Whether this calcium signal is generated by calcium influx or requires calcium-induced calcium release (CICR) is not yet known. IHCs can generate CICR, but to date its physiological role has remained unclear. Here, we used high and low concentrations of ryanodine to block or enhance CICR to determine whether calcium release from intracellular stores affected action potential waveform, interspike interval, or changes in membrane capacitance during development of mouse IHCs. Blocking CICR resulted in mixed action potential waveforms with both brief and prolonged oscillations in membrane potential and intracellular calcium. This mixed behavior is captured well by our mathematical model of IHC electrical activity. We perform a two-parameter bifurcation analysis of the model that predicts how IHC firing patterns depend on two parameters: the level of SK2 channel activation and the CICR rate. Our data show that CICR forms an important component of the calcium signal that shapes action potentials and regulates firing patterns, but is not involved directly in triggering exocytosis. 
These data provide important insights into the calcium signaling mechanisms involved in early developmental processes. PMID:25762313

  18. Transcriptional maturation of the mouse auditory forebrain.

    PubMed

    Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly

    2015-08-14

    The maturation of the brain involves the coordinated expression of thousands of genes, proteins and regulatory elements over time. In sensory pathways, gene expression profiles are modified by age and sensory experience in a manner that differs between brain regions and cell types. In the auditory system of altricial animals, neuronal activity increases markedly after the opening of the ear canals, initiating events that culminate in the maturation of auditory circuitry in the brain. This window provides a unique opportunity to study how gene expression patterns are modified by the onset of sensory experience through maturity. As a tool for capturing these features, next-generation sequencing of total RNA (RNAseq) has tremendous utility, because the entire transcriptome can be screened to index expression of any gene. To date, whole transcriptome profiles have not been generated for any central auditory structure in any species at any age. In the present study, RNAseq was used to profile two regions of the mouse auditory forebrain (A1, primary auditory cortex; MG, medial geniculate) at key stages of postnatal development (P7, P14, P21, adult) before and after the onset of hearing (~P12). Hierarchical clustering, differential expression, and functional geneset enrichment analyses (GSEA) were used to profile the expression patterns of all genes. Selected genesets related to neurotransmission, developmental plasticity, critical periods and brain structure were highlighted. An accessible repository of the entire dataset was also constructed that permits extraction and screening of all data from the global through single-gene levels. To our knowledge, this is the first whole transcriptome sequencing study of the forebrain of any mammalian sensory system. Although the data are most relevant for the auditory system, they are generally applicable to forebrain structures in the visual and somatosensory systems, as well. 
The main findings were: (1) Global gene expression patterns were tightly clustered by postnatal age and brain region; (2) comparing A1 and MG, the total numbers of differentially expressed genes were comparable from P7 to P21, then dropped to nearly half by adulthood; (3) comparing successive age groups, the greatest numbers of differentially expressed genes were found between P7 and P14 in both regions, followed by a steady decline in numbers with age; (4) maturational trajectories in expression levels varied at the single gene level (increasing, decreasing, static, other); (5) between regions, the profiles of single genes were often asymmetric; (6) GSEA revealed that genesets related to neural activity and plasticity were typically upregulated from P7 to adult, while those related to structure tended to be downregulated; (7) GSEA and pathways analysis of selected functional networks were not predictive of expression patterns in the auditory forebrain for all genes, reflecting regional specificity at the single gene level. Gene expression in the auditory forebrain during postnatal development is in constant flux and becomes increasingly stable with age. Maturational changes are evident at the global through single gene levels. Transcriptome profiles in A1 and MG are distinct at all ages, and differ from other brain regions. The database generated by this study provides a rich foundation for the identification of novel developmental biomarkers, functional gene pathways, and targeted studies of postnatal maturation in the auditory forebrain.

  19. The harmonic organization of auditory cortex.

    PubMed

    Wang, Xiaoqin

    2013-12-17

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it would be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  20. Sensory hair cell development and regeneration: similarities and differences

    PubMed Central

    Atkinson, Patrick J.; Huarcaya Najarro, Elvis; Sayyid, Zahra N.; Cheng, Alan G.

    2015-01-01

    Sensory hair cells are mechanoreceptors of the auditory and vestibular systems and are crucial for hearing and balance. In adult mammals, auditory hair cells are unable to regenerate, and damage to these cells results in permanent hearing loss. By contrast, hair cells in the chick cochlea and the zebrafish lateral line are able to regenerate, prompting studies into the signaling pathways, morphogen gradients and transcription factors that regulate hair cell development and regeneration in various species. Here, we review these findings and discuss how various signaling pathways and factors function to modulate sensory hair cell development and regeneration. By comparing and contrasting development and regeneration, we also highlight the utility and limitations of using defined developmental cues to drive mammalian hair cell regeneration. PMID:25922522

  1. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using interaural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space persist throughout the auditory pathway. We review these differences at the neural level and discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
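In its simplest geometric form, the head-centered to eye-centered transformation discussed in this abstract reduces to subtracting the current eye position from the sound's head-centered direction. The sketch below is purely illustrative (the function name is invented, and real neural codes are rate-based and far less explicit):

```python
def head_to_eye_centered(sound_azimuth_deg, eye_azimuth_deg):
    """Convert a head-centered sound azimuth to eye-centered
    coordinates by subtracting the current eye position,
    wrapping the result into the range [-180, 180) degrees."""
    return (sound_azimuth_deg - eye_azimuth_deg + 180.0) % 360.0 - 180.0
```

For example, a sound at 30 degrees to the right of the head, with the eyes deviated 10 degrees rightward, lies 20 degrees right of the fovea in eye-centered coordinates.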

  2. Neurotoxicity of trimethyltin in rat cochlear organotypic cultures

    PubMed Central

    Yu, Jintao; Ding, Dalian; Sun, Hong; Salvi, Richard; Roth, Jerome A.

    2015-01-01

Trimethyltin (TMT), which has a variety of applications in industry and agriculture, is a neurotoxin known to affect the auditory system as well as the central nervous system (CNS) of humans and experimental animals. However, the mechanisms underlying TMT-induced auditory dysfunction are poorly understood. To gain insight into the neurotoxic effect of TMT on the peripheral auditory system, we treated cochlear organotypic cultures with concentrations of TMT ranging from 5 to 100 μM for 24 h. Interestingly, TMT preferentially damaged auditory nerve fibers and spiral ganglion neurons in a dose-dependent manner, but had no noticeable effects on the sensory hair cells at the doses employed. TMT-induced damage to auditory neurons was associated with significant soma shrinkage, nuclear condensation and activation of caspase-3, biomarkers indicative of apoptotic cell death. Our findings show that TMT is exclusively neurotoxic in rat cochlear organotypic cultures and that TMT-induced auditory neuron death occurs through a caspase-mediated apoptotic pathway. PMID:25957118

  3. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    PubMed

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

The study evaluated whether a difference or relation exists in the way four memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, the visual modality, and combined modalities. The four memory skills were evaluated in 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality than through the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores or on the memory and sequencing spans. Good agreement was seen between the different modality conditions studied (auditory, visual, & auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement, measured using Bland-Altman plots, was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual, and combined modalities. The study supports the view that children's performance on different memory skills was better through the auditory modality than through the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
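The Bland-Altman agreement analysis this abstract mentions can be summarized in a few lines. The following is a minimal sketch, not the authors' analysis code (the function name is invented; 1.96 × SD gives the conventional 95% limits of agreement):

```python
from statistics import mean, stdev

def bland_altman(scores_a, scores_b):
    """Bias and 95% limits of agreement between paired scores.

    For each subject, take the difference between the two
    measurement conditions. The mean difference is the bias, and
    bias +/- 1.96 * SD of the differences are the limits within
    which roughly 95% of differences are expected to fall.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread
```

A narrow interval around a bias near zero indicates good agreement between two modality conditions; a wide interval, as reported here for the auditory versus visual memory scores, indicates poorer agreement.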

  4. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  5. Applications of psychophysical models to the study of auditory development

    NASA Astrophysics Data System (ADS)

    Werner, Lynne

    2003-04-01

Psychophysical models of listening, such as the energy detector model, have provided a framework from which to characterize the function of the mature auditory system and to explore how mature listeners make use of auditory information in sound identification. The application of such models to the study of auditory development has similarly provided insight into the characteristics of infant hearing and listening. Infants' intensity, frequency, temporal, and spatial resolution have been described at least grossly, and some contributions of immature listening strategies to infant hearing have been identified. Infants' psychoacoustic performance is typically poorer than adults' under identical stimulus conditions. However, the infant's performance typically varies with stimulus condition in a way that is qualitatively similar to the adult's performance. In some cases, though, infants perform in a qualitatively different way from adults in psychoacoustic experiments. Further, recent psychoacoustic studies of children suggest that the classic models of listening may be inadequate to describe the children's performance. The characteristics of a model that might be appropriate for the immature listener will be outlined and the implications for models of mature listening will be discussed. [Work supported by NIH Grants DC00396 and DC04661.]

  6. Linking prenatal experience to the emerging musical mind.

    PubMed

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  7. Natural and lesion-induced decrease in cell proliferation in the medial nucleus of the trapezoid body during hearing development.

    PubMed

    Saliu, Aminat; Adise, Shana; Xian, Sandy; Kudelska, Kamila; Rodríguez-Contreras, Adrián

    2014-04-01

    The functional interactions between neurons and glial cells that are important for nervous system function are presumably established during development from the activity of progenitor cells. In this study we examined proliferation of progenitor cells in the medial nucleus of the trapezoid body (MNTB) located in the rat auditory brainstem. We performed DNA synthesis labeling experiments to demonstrate changes in cell proliferation activity during postnatal stages of development. An increase in cell proliferation correlated with MNTB growth and the presence of S100β-positive astrocytes among MNTB neurons. In additional experiments we analyzed the fate of newly born cells. At perinatal ages, newly born cells colabeled with the astrocyte marker S100β in higher numbers than when cells were generated at postnatal day 6. Furthermore, we identified newly born cells that were colabeled with caspase-3 immunohistochemistry and performed comparative experiments to demonstrate that there is a natural decrease in cell proliferation activity during postnatal development in rats, mice, gerbils, and ferrets. Lastly, we found that there is a stronger decrease in MNTB cell proliferation after performing bilateral lesions of the auditory periphery in rats. Altogether, these results identify important stages in the development of astrocytes in the MNTB and provide evidence that the proliferative activity of the progenitor cells is developmentally regulated. We propose that the developmental reduction in cell proliferation may reflect coordinated signaling between the auditory brainstem and the auditory periphery. Copyright © 2013 The Authors. Wiley Periodicals, Inc.

  8. An Inexpensive Group FM Amplification System for the Classroom.

    ERIC Educational Resources Information Center

    Worner, William A.

    1988-01-01

    An inexpensive FM amplification system was developed to enhance auditory learning in classrooms for the hearing impaired. Evaluation indicated that the system equalizes the sound pressure level throughout the room, with the increased sound pressure level falling in the range of 70 to 73 decibels. (Author/DB)

  9. Neuro-Linguistics Programming: Developing Effective Communication in the Classroom.

    ERIC Educational Resources Information Center

    Torres, Cresencio; Katz, Judy H.

    1983-01-01

Students and teachers experience the world primarily through visual, kinesthetic, or auditory representational systems. If teachers are aware of their own favored system and those of their students, classroom communication will improve. Neurolinguistic programming can help teachers become more effective communicators. (PP)

  10. Alpha Rhythms in Audition: Cognitive and Clinical Perspectives

    PubMed Central

    Weisz, Nathan; Hartmann, Thomas; Müller, Nadia; Lorenz, Isabel; Obleser, Jonas

    2011-01-01

Like the visual and the sensorimotor systems, the auditory system exhibits pronounced alpha-like resting oscillatory activity. Due to the relatively small spatial extent of auditory cortical areas, this rhythmic activity is less obvious and frequently masked by non-auditory alpha generators when recording non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). Following stimulation with sounds, marked desynchronizations can be observed between 6 and 12 Hz, which can be localized to the auditory cortex. However, knowledge about the functional relevance of the auditory alpha rhythm has remained scarce so far. Results from the visual and sensorimotor systems have fuelled the hypothesis that alpha activity reflects a state of functional inhibition. The current article pursues several intentions: (1) First, we review and present our own evidence (MEG, EEG, sEEG) for the existence of an auditory alpha-like rhythm independent of visual or motor generators, something that is occasionally met with skepticism. (2) In a second part we discuss tinnitus and how this audiological symptom may relate to reduced background alpha. The clinical part gives an introduction to a method that aims to modulate the neurophysiological activity hypothesized to underlie this distressing disorder. Using neurofeedback, one is able to directly target relevant oscillatory activity. Preliminary data point to a high potential of this approach for treating tinnitus. (3) Finally, in a cognitive neuroscientific part we show that auditory alpha is modulated by anticipation/expectations with and without auditory stimulation. We also introduce ideas and initial evidence that alpha oscillations are involved in the most complex capability of the auditory system, namely speech perception. The evidence presented in this article corroborates findings from other modalities, indicating that alpha-like activity has a universal inhibitory role across sensory modalities. PMID:21687444

  11. Changes in Properties of Auditory Nerve Synapses following Conductive Hearing Loss.

    PubMed

    Zhuang, Xiaowen; Sun, Wei; Xu-Friedman, Matthew A

    2017-01-11

    Auditory activity plays an important role in the development of the auditory system. Decreased activity can result from conductive hearing loss (CHL) associated with otitis media, which may lead to long-term perceptual deficits. The effects of CHL have been mainly studied at later stages of the auditory pathway, but early stages remain less examined. However, changes in early stages could be important because they would affect how information about sounds is conveyed to higher-order areas for further processing and localization. We examined the effects of CHL at auditory nerve synapses onto bushy cells in the mouse anteroventral cochlear nucleus following occlusion of the ear canal. These synapses, called endbulbs of Held, normally show strong depression in voltage-clamp recordings in brain slices. After 1 week of CHL, endbulbs showed even greater depression, reflecting higher release probability. We observed no differences in quantal size between control and occluded mice. We confirmed these observations using mean-variance analysis and the integration method, which also revealed that the number of release sites decreased after occlusion. Consistent with this, synaptic puncta immunopositive for VGLUT1 decreased in area after occlusion. The level of depression and number of release sites both showed recovery after returning to normal conditions. Finally, bushy cells fired fewer action potentials in response to evoked synaptic activity after occlusion, likely because of increased depression and decreased input resistance. These effects appear to reflect a homeostatic, adaptive response of auditory nerve synapses to reduced activity. These effects may have important implications for perceptual changes following CHL. Normal hearing is important to everyday life, but abnormal auditory experience during development can lead to processing disorders. 
For example, otitis media reduces sound to the ear, which can cause long-lasting deficits in language skills and verbal production, but the location of the problem is unknown. Here, we show that occluding the ear causes synapses at the very first stage of the auditory pathway to modify their properties, by decreasing in size and increasing the likelihood of releasing neurotransmitter. This causes synapses to deplete faster, which reduces fidelity at central targets of the auditory nerve, which could affect perception. Temporary hearing loss could cause similar changes at later stages of the auditory pathway, which could contribute to disorders in behavior. Copyright © 2017 the authors 0270-6474/17/370323-10$15.00/0.
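The mean-variance analysis this abstract refers to is a standard quantal-analysis technique. As an illustrative sketch, not the authors' code (the function name and test values are hypothetical), a simple binomial release model predicts that the variance of the synaptic response is a parabolic function of its mean, var = q·m − m²/N, where q is the quantal size and N the number of release sites:

```python
def fit_mean_variance(means, variances):
    """Estimate quantal size q and number of release sites N.

    Assumes a simple binomial release model, under which
    variance = q*mean - mean**2 / N, a parabola through the
    origin in the (mean, variance) plane. Fit var ~ a*m + b*m^2
    by linear least squares; then q = a and N = -1/b.
    """
    s11 = sum(m * m for m in means)    # sum of m^2
    s12 = sum(m ** 3 for m in means)   # sum of m^3
    s22 = sum(m ** 4 for m in means)   # sum of m^4
    t1 = sum(m * v for m, v in zip(means, variances))
    t2 = sum(m * m * v for m, v in zip(means, variances))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det    # Cramer's rule for the 2x2 system
    b = (s11 * t2 - s12 * t1) / det
    return a, -1.0 / b                 # (quantal size q, release sites N)
```

Responses recorded at several release probabilities yield (mean, variance) pairs to fit; an unchanged quantal size together with a reduced N and higher release probability is the pattern of change the abstract describes after occlusion.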

  12. Extrinsic Embryonic Sensory Stimulation Alters Multimodal Behavior and Cellular Activation

    PubMed Central

    Markham, Rebecca G.; Shimizu, Toru; Lickliter, Robert

    2009-01-01

    Embryonic vision is generated and maintained by spontaneous neuronal activation patterns, yet extrinsic stimulation also sculpts sensory development. Because the sensory and motor systems are interconnected in embryogenesis, how extrinsic sensory activation guides multimodal differentiation is an important topic. Further, it is unknown whether extrinsic stimulation experienced near sensory sensitivity onset contributes to persistent brain changes, ultimately affecting postnatal behavior. To determine the effects of extrinsic stimulation on multimodal development, we delivered auditory stimulation to bobwhite quail groups during early, middle, or late embryogenesis, and then tested postnatal behavioral responsiveness to auditory or visual cues. Auditory preference tendencies were more consistently toward the conspecific stimulus for animals stimulated during late embryogenesis. Groups stimulated during middle or late embryogenesis showed altered postnatal species-typical visual responsiveness, demonstrating a persistent multimodal effect. We also examined whether auditory-related brain regions are receptive to extrinsic input during middle embryogenesis by measuring postnatal cellular activation. Stimulated birds showed a greater number of ZENK-immunopositive cells per unit volume of brain tissue in deep optic tectum, a midbrain region strongly implicated in multimodal function. We observed similar results in the medial and caudomedial nidopallia in the telencephalon. There were no ZENK differences between groups in inferior colliculus or in caudolateral nidopallium, avian analog to prefrontal cortex. To our knowledge, these are the first results linking extrinsic stimulation delivered so early in embryogenesis to changes in postnatal multimodal behavior and cellular activation. The potential role of competitive interactions between the sensory and motor systems is discussed. PMID:18777564

  13. A Technological Review of the Instrumented Footwear for Rehabilitation with a Focus on Parkinson’s Disease Patients

    PubMed Central

    Maculewicz, Justyna; Kofoed, Lise Busk; Serafin, Stefania

    2016-01-01

    In this review article, we summarize systems for gait rehabilitation based on instrumented footwear and present a context of their usage in Parkinson’s disease (PD) patients’ auditory and haptic rehabilitation. We focus on the needs of PD patients, but since only a few systems were made with this purpose, we go through several applications used in different scenarios when gait detection and rehabilitation are considered. We present developments of the designs, possible improvements, and software challenges and requirements. We conclude that in order to build successful systems for PD patients’ gait rehabilitation, technological solutions from several studies have to be applied and combined with knowledge from auditory and haptic cueing. PMID:26834696

  14. Audiological and electrophysiological evaluation of children with acquired immunodeficiency syndrome (AIDS).

    PubMed

    Matas, Carla Gentile; Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Gonçalves, Isabela Crivellaro

    2006-08-01

    We examined the peripheral auditory system and the auditory brainstem pathway of children with Acquired Immunodeficiency Syndrome (AIDS). One hundred and one children, 51 with AIDS diagnosis and 50 normal children were evaluated. Audiological assessment included immittance measures, pure tone and speech audiometry and auditory brainstem response (ABR). The children with AIDS more frequently had abnormal results than did their matched controls, presenting either peripheral or auditory brainstem impairment. We suggest that AIDS be considered a risk factor for peripheral and/or auditory brainstem disorders. Further research should be carried out to investigate the auditory effects of HIV infection along the auditory pathway.

  15. Comparisons of MRI images, and auditory-related and vocal-related protein expressions in the brain of echolocation bats and rodents.

    PubMed

    Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun

    2016-08-17

    Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences in the central nervous system devoted to producing and processing species-specific sounds between them. The present study examines the difference in the relative size of several brain structures and expression of auditory-related and vocal-related proteins in the central nervous system of echolocation bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior.

  16. Perspectives on the Pure-Tone Audiogram.

    PubMed

    Musiek, Frank E; Shinn, Jennifer; Chermak, Gail D; Bamiou, Doris-Eva

    The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement. To review and synthesize the literature regarding the utility and limitations of the pure-tone audiogram in determining dysfunction of peripheral sensory and neural systems, as well as the CANS, and to identify other tests and procedures that can supplement pure-tone thresholds and provide enhanced diagnostic insight, especially regarding problems of the central auditory system. A systematic review and synthesis of the literature. The authors independently searched and reviewed literature (journal articles, book chapters) pertaining to the limitations of the pure-tone audiogram. The pure-tone audiogram provides information as to hearing sensitivity across a selected frequency range. Normal or near-normal pure-tone thresholds sometimes are observed despite cochlear damage. There are a surprising number of patients with acoustic neuromas who have essentially normal pure-tone thresholds. In cases of central deafness, depressed pure-tone thresholds may not accurately reflect the status of the peripheral auditory system. Listening difficulties are seen in the presence of normal pure-tone thresholds. Suprathreshold procedures and a variety of other tests can provide information regarding other and often more central functions of the auditory system. 
The audiogram is a primary tool for determining type, degree, and configuration of hearing loss; however, it provides the clinician with information regarding only hearing sensitivity, and no information about central auditory processing or the auditory processing of real-world signals (i.e., speech, music). The pure-tone audiogram offers limited insight into functional hearing and should be viewed only as a test of hearing sensitivity. Given the limitations of the pure-tone audiogram, a brief overview is provided of available behavioral tests and electrophysiological procedures that are sensitive to the function and integrity of the central auditory system, which provide better diagnostic and rehabilitative information to the clinician and patient. American Academy of Audiology

  17. Age-related decline of the cytochrome c oxidase subunit expression in the auditory cortex of the mimetic aging rat model associated with the common deletion.

    PubMed

    Zhong, Yi; Hu, Yujuan; Peng, Wei; Sun, Yu; Yang, Yang; Zhao, Xueyan; Huang, Xiang; Zhang, Honglian; Kong, Weijia

    2012-12-01

The age-related deterioration in the central auditory system is well known to impair the abilities of sound localization and speech perception. However, the mechanisms involved in the age-related central auditory deficiency remain unclear. Previous studies have demonstrated that mitochondrial DNA (mtDNA) deletions accumulated with age in the auditory system. Also, a cytochrome c oxidase (CcO) deficiency has been proposed to be a causal factor in the age-related decline in mitochondrial respiratory activity. This study was designed to explore the changes of CcO activity and to investigate the possible relationship between the mtDNA common deletion (CD) and CcO activity as well as the mRNA expression of CcO subunits in the auditory cortex of D-galactose (D-gal)-induced mimetic aging rats at different ages. Moreover, we explored whether peroxisome proliferator-activated receptor-γ coactivator 1α (PGC-1α), nuclear respiratory factor 1 (NRF-1) and mitochondrial transcription factor A (TFAM) were involved in the changes of nuclear- and mitochondrial-encoded CcO subunits in the auditory cortex during aging. Our data demonstrated that D-gal-induced mimetic aging rats exhibited an accelerated accumulation of the CD and a gradual decline in the CcO activity in the auditory cortex during the aging process. The reduction in the CcO activity was correlated with the level of CD load in the auditory cortex. The mRNA expression of CcO subunit III was reduced significantly with age in the D-gal-induced mimetic aging rats. In contrast, the decline in the mRNA expression of subunits I and IV was relatively minor. Additionally, significant increases in the mRNA and protein levels of PGC-1α, NRF-1 and TFAM were observed in the auditory cortex of D-gal-induced mimetic aging rats with aging. 
These findings suggested that the accelerated accumulation of the CD in the auditory cortex may induce a substantial decline in CcO subunit III and lead to a significant decline in the CcO activity progressively with age despite compensatory increases of PGC-1α, NRF-1 and TFAM. Therefore, CcO may be a specific intramitochondrial site of age-related deterioration in the auditory cortex, and CcO subunit III might be a target in the development of presbycusis. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.

    PubMed

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about 2/3, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.

  20. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment had previously been unexplored. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, several concerns must be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training listeners to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was derived from the characterization of successful search strategies in prior auditory search experiments, and search accuracy significantly improved after listeners completed it. Next, to investigate auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required storing sound source configurations in memory. To assess practical scenarios, the present work measured the performance effects of signal uncertainty, visual augmentation, and different attenuation models. Source uncertainty did not affect listeners' ability to recall or identify sound sources; the presence of visual reference frames significantly increased recall accuracy; and the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating these concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
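The core rendering pipeline implied by this work (a per-listener HRTF pair plus an attenuation model) can be sketched in a few lines. This is a minimal illustration and not the study's implementation; the toy HRIRs, the inverse-distance attenuation, and all names below are assumptions for the sketch. In practice the HRIR pair would come from a measured database, e.g. one subjectively selected by the listener.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right, distance=1.0):
    """Minimal binaural rendering sketch: convolve a mono source with a
    head-related impulse response (HRIR) pair and apply simple 1/r
    distance attenuation. Returns a (2, N) stereo array."""
    gain = 1.0 / max(distance, 1e-3)      # inverse-distance attenuation
    left = np.convolve(mono, hrir_left) * gain
    right = np.convolve(mono, hrir_right) * gain
    return np.stack([left, right])

# Toy HRIRs: the right-ear response is delayed and attenuated, which
# places the virtual source toward the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.5])
mono = np.ones(100)
out = render_binaural(mono, hrir_l, hrir_r, distance=2.0)
```

Real systems would additionally interpolate between measured HRIRs as the listener's head moves through the virtual environment.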

  1. Medial Auditory Thalamus Is Necessary for Acquisition and Retention of Eyeblink Conditioning to Cochlear Nucleus Stimulation

    ERIC Educational Resources Information Center

    Halverson, Hunter E.; Poremba, Amy; Freeman, John H.

    2015-01-01

    Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…

  2. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future improvements are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
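The detection principle this system borrows from neuromorphic cameras, firing an event whenever the log brightness at a pixel changes past a threshold, can be emulated in software. The sketch below is a toy frame-based emulation, not the DAVIS 240B's hardware behavior; the threshold value and all function names are assumptions.

```python
import numpy as np

def emit_events(frames, threshold=0.2):
    """Toy DVS-style event generator: emit (t, x, y, polarity) events
    wherever log brightness changes by more than `threshold` relative to
    a per-pixel reference, which resets when the pixel fires (as in
    event cameras)."""
    eps = 1e-6
    ref = np.log(frames[0] + eps)          # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
            ref[y, x] = log_i[y, x]        # reset reference at firing pixel
    return events

# A bright spot appears at (row 1, col 2): one positive-polarity event.
dark = np.full((4, 4), 0.1)
bright = dark.copy()
bright[1, 2] = 1.0
events = emit_events([dark, bright])
```

Each event could then be sonified at a spatial position derived from its (x, y) coordinates, which is the augmentation step the abstract describes.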

  3. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Tata, Matthew S.

    2017-01-01

Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future improvements are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518

  4. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve) that do not merely transmit information but truly integrate the sound stimulus at each level, analyzing its three fundamental attributes: frequency (pitch), intensity, and the spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited around the one tuned to the stimulus's characteristic frequency). Spatial localization of the sound source is possible because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and intensity difference between the signals arriving at the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through attention to the signal.
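The binaural cues described here, the time (phase) and intensity differences between the two ears, can be estimated from a stereo signal with standard signal processing. A minimal sketch, not taken from the article: the interaural time difference (ITD) from the peak of the cross-correlation, and the interaural level difference (ILD) as an RMS ratio in decibels. The function names and the toy stimulus are assumptions.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate ITD (seconds) via the cross-correlation peak and ILD (dB)
    via the RMS ratio of the two ear signals. With np.correlate(left,
    right), a positive lag means the left signal lags the right."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / fs
    ild = 20 * np.log10(np.sqrt(np.mean(left ** 2)) /
                        np.sqrt(np.mean(right ** 2)))
    return itd, ild

# Toy example: the same windowed tone burst arrives ~0.5 ms later and
# 6 dB quieter at the right ear (a source off to the listener's left).
fs = 44100
t = np.arange(0, 0.02, 1 / fs)
burst = np.sin(2 * np.pi * 500 * t) * np.hanning(len(t))
delay = int(0.0005 * fs)                       # ~0.5 ms in samples
left = np.concatenate([burst, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), burst]) * 10 ** (-6 / 20)
itd, ild = itd_ild(left, right, fs)
```

The recovered cue magnitudes (about 0.5 ms and 6 dB) match what was put into the toy stimulus, which is the kind of computation the binaural system performs via its commissural pathways.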

  5. Sensory Coding and Sensitivity to Local Estrogens Shift during Critical Period Milestones in the Auditory Cortex of Male Songbirds.

    PubMed

    Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke

    2017-01-01

Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.

  6. Sensory Coding and Sensitivity to Local Estrogens Shift during Critical Period Milestones in the Auditory Cortex of Male Songbirds

    PubMed Central

    2017-01-01

    Abstract Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor’s song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM’s established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural “switch point” from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds. PMID:29255797

  7. Auditory pathways: are 'what' and 'where' appropriate?

    PubMed

    Hall, Deborah A

    2003-05-13

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading.

  8. Involvement of the human midbrain and thalamus in auditory deviance detection.

    PubMed

    Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César

    2015-02-01

Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning over multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, here we report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Relative size of auditory pathways in symmetrically and asymmetrically eared owls.

    PubMed

    Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R

    2011-01-01

Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of the nuclei that compute space may have preceded the expansion of the hearing range, and that evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.

  10. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  11. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    NASA Astrophysics Data System (ADS)

    Misurelli, Sara M.

The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occurs within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) Investigate source segregation abilities in children with NH and with BiCIs; (2) Examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) Investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) Examine source segregation abilities in NH listeners, from school-age to adults.

  12. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  13. Auditory Cortical Processing in Real-World Listening: The Auditory System Going Real

    PubMed Central

    Bizley, Jennifer; Shamma, Shihab A.; Wang, Xiaoqin

    2014-01-01

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. PMID:25392481

  14. Auditory cortical processing in real-world listening: the auditory system going real.

    PubMed

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors.

  15. Anatomy of the auditory thalamocortical system in the Mongolian gerbil: nuclear origins and cortical field-, layer-, and frequency-specificities.

    PubMed

    Saldeitis, Katja; Happel, Max F K; Ohl, Frank W; Scheich, Henning; Budinger, Eike

    2014-07-01

Knowledge of the anatomical organization of the auditory thalamocortical (TC) system is fundamental for the understanding of auditory information processing in the brain. In the Mongolian gerbil (Meriones unguiculatus), a valuable model species in auditory research, the anatomy of this system has not yet been worked out in detail. Here, we investigated the projections from the three subnuclei of the medial geniculate body (MGB), namely, its ventral (MGv), dorsal (MGd), and medial (MGm) divisions, as well as from several of their subdivisions (MGv: pars lateralis [LV], pars ovoidea [OV], rostral pole [RP]; MGd: deep dorsal nucleus [DD]), to the auditory cortex (AC) by stereotaxic pressure injections and electrophysiologically guided iontophoretic injections of the anterograde tract tracer biocytin. Our data reveal highly specific features of the TC connections regarding their nuclear origin in the subdivisions of the MGB and their termination patterns in the auditory cortical fields and layers. In addition to tonotopically organized projections, primarily of the LV, OV, and DD to the AC, a large number of axons diverge across the tonotopic gradient. These originate mainly from the RP, MGd (proper), and MGm. In particular, neurons of the MGm project in a columnar fashion to several auditory fields, forming small- and medium-sized boutons, and also hitherto unknown giant terminals. The distinctive layer-specific distribution of axonal endings within the AC indicates that each of the TC connectivity systems has a specific function in auditory cortical processing. Copyright © 2014 Wiley Periodicals, Inc.

  16. Investigation of the neurological correlates of information reception

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Animals trained to respond to a given pattern of electrical stimuli applied to pathways or centers of the auditory nervous system respond also to certain patterns of acoustic stimuli without additional training. Likewise, only certain electrical stimuli elicit responses after training to a given acoustic signal. In most instances, if a response has been learned to a given electrical stimulus applied to one center of the auditory nervous system, the same stimulus applied to another auditory center at either a higher or lower level will also elicit the response. This kind of transfer of response does not take place when a stimulus is applied through electrodes implanted in neural tissue outside of the auditory system.

  17. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. 
A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  18. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  19. Differential Diagnosis of Speech Sound Disorder (Phonological Disorder): Audiological Assessment beyond the Pure-tone Audiogram.

    PubMed

    Iliadou, Vasiliki Vivian; Chermak, Gail D; Bamiou, Doris-Eva

    2015-04-01

    According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child's SSD. Central auditory processing disorder clinic pediatric case reports. Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech-language pathologists as a result of slower than expected progress in therapy. Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient's speech sound (phonological) disorder. Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD. American Academy of Audiology.

  20. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Crossmodal association of auditory and visual material properties in infants.

    PubMed

    Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K

    2018-06-18

    The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, using near-infrared spectroscopy (NIRS), we demonstrated for the first time a mapping of auditory material properties onto visual materials ("Metal" and "Wood") in the right temporal region of preverbal 4- to 8-month-old infants. Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, consistent with infants forming the visual property of "Metal" only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.

  2. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    PubMed Central

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  3. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
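
    The systems-theoretic stimulus reconstruction named above can be illustrated with a regularized linear (backward) decoder that maps multichannel neural responses back onto a stimulus envelope. This is a minimal synthetic sketch, not the study's actual pipeline: the simulated data, the number of channels and lags, and the ridge parameter are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_ch, n_lags = 2000, 8, 5

    # Hypothetical stimulus envelope and a noisy linear neural encoding of it
    envelope = rng.standard_normal(T)
    weights = rng.standard_normal((n_lags, n_ch))
    response = np.zeros((T, n_ch))
    for lag in range(n_lags):
        response[lag:] += np.outer(envelope[:T - lag], weights[lag])
    response += 0.1 * rng.standard_normal((T, n_ch))

    # Lagged design matrix: row t holds responses at times t .. t+n_lags-1
    X = np.hstack([np.roll(response, -lag, axis=0) for lag in range(n_lags)])
    X, y = X[:T - n_lags], envelope[:T - n_lags]

    # Ridge regression: regularized least-squares backward model
    lam = 1.0
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    reconstruction = X @ W

    # Reconstruction fidelity, as correlation with the true envelope
    fidelity = np.corrcoef(reconstruction, y)[0, 1]
    print(round(fidelity, 3))
    ```

    Comparing such fidelity scores for attended versus ignored streams, or for a summed "global scene" versus individual streams, is the kind of contrast the abstract describes.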

  4. Development of auditory sensitivity in budgerigars (Melopsittacus undulatus)

    NASA Astrophysics Data System (ADS)

    Brittan-Powell, Elizabeth F.; Dooling, Robert J.

    2004-06-01

    Auditory feedback influences the development of vocalizations in songbirds and parrots; however, little is known about the development of hearing in these birds. The auditory brainstem response was used to track the development of auditory sensitivity in budgerigars from hatch to 6 weeks of age. Responses were first obtained from 1-week-old birds at high stimulation levels at frequencies at or below 2 kHz, showing that budgerigars do not hear well at hatch. Over the next week, thresholds improved markedly, and responses were obtained for almost all test frequencies throughout the range of hearing by 14 days. By 3 weeks posthatch, birds' best sensitivity shifted from 2 to 2.86 kHz, and the shape of the auditory brainstem response (ABR) audiogram became similar to that of adult budgerigars. About a week before leaving the nest, ABR audiograms of young budgerigars are very similar to those of adult birds. These data complement what is known about vocal development in budgerigars and show that hearing is fully developed by the time that vocal learning begins.

  5. Longitudinal Comparison of Auditory Steady-State Evoked Potentials in Preterm and Term Infants: The Maturation Process

    PubMed Central

    Sousa, Ana Constantino; Didoné, Dayane Domeneghini; Sleifer, Pricila

    2017-01-01

    Introduction: Preterm neonates are at risk of changes in their auditory system development, which explains the need for auditory monitoring of this population. The Auditory Steady-State Response (ASSR) is an objective method for obtaining electrophysiological thresholds, with broad applicability in the neonatal and pediatric population. Objective: The purpose of this study is to compare ASSR thresholds in preterm and term infants evaluated during two stages. Method: The study included 63 normal-hearing neonates: 33 preterm and 30 term. They underwent ASSR assessment in both ears simultaneously through insert phones at frequencies of 500 to 4000 Hz, amplitude-modulated from 77 to 103 Hz. Intensity was presented at decreasing levels to detect the minimum response level. At 18 months, 26 of the 33 preterm infants returned for a new ASSR assessment and were compared with 30 full-term infants. Groups were compared according to gestational age. Results: Electrophysiological thresholds were higher in preterm than in full-term neonates (p < 0.05) at the first testing. There were no significant differences between ears or by gender. At 18 months, there was no difference between groups (p > 0.05) in any of the variables described. Conclusion: In the first evaluation, preterm infants had higher ASSR thresholds. There was no difference at 18 months of age, demonstrating auditory maturation in preterm infants throughout their development. PMID:28680486

  6. Acetylcholinesterase Inhibition and Information Processing in the Auditory Cortex

    DTIC Science & Technology

    1986-04-30

    (9,24,29,30), or for causing auditory hallucinations (2,23,31,32). Thus, compounds which alter cholinergic transmission, in particular anticholinesterases...the upper auditory system. Thus, attending to and understanding verbal messages in humans, irrespective of the particular voice which speaks them, may...

  7. The harmonic organization of auditory cortex

    PubMed Central

    Wang, Xiaoqin

    2013-01-01

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544

  8. Knockout Mice for Dyslexia Susceptibility Gene Homologs KIAA0319 and KIAA0319L have Unaffected Neuronal Migration but Display Abnormal Auditory Processing

    PubMed Central

    Guidi, Luiz G; Mattley, Jane; Martinez-Garay, Isabel; Monaco, Anthony P; Linden, Jennifer F; Velayos-Baeza, Antonio

    2017-01-01

    Developmental dyslexia is a neurodevelopmental disorder, caused by genetic and non-genetic factors, that affects reading ability. Amongst the susceptibility genes identified to date, KIAA0319 is a prime candidate. RNA-interference experiments in rats suggested its involvement in cortical migration, but we could not confirm these findings in Kiaa0319-mutant mice. Given that its homologous gene Kiaa0319L (AU040320) has also been proposed to play a role in neuronal migration, we interrogated whether absence of AU040320 alone or together with KIAA0319 affects migration in the developing brain. Analyses of AU040320 and double Kiaa0319;AU040320 knockouts (dKO) revealed no evidence for impaired cortical lamination, neuronal migration, neurogenesis or other anatomical abnormalities. However, dKO mice displayed an auditory deficit in a behavioral gap-in-noise detection task. In addition, recordings of click-evoked auditory brainstem responses revealed suprathreshold deficits in wave III amplitude in AU040320-KO mice, and more general deficits in dKOs. These findings suggest that absence of AU040320 disrupts firing and/or synchrony of activity in the auditory brainstem, while loss of both proteins might affect both peripheral and central auditory function. Overall, these results stand against the proposed role of KIAA0319 and AU040320 in neuronal migration and outline their relationship with deficits in the auditory system. PMID:29045729

  9. Multimedia-assisted breathwalk-aware system.

    PubMed

    Yu, Meng-Chieh; Wu, Huan; Lee, Ming-Sui; Hung, Yi-Ping

    2012-12-01

    Breathwalk is a technique that combines specific patterns of footsteps synchronized with breathing. In this study, we developed a multimedia-assisted Breathwalk-aware system which detects the user's walking and breathing conditions and provides appropriate multimedia guidance on the smartphone. Through the mobile device, the system enhances the user's awareness of walking and breathing behaviors. As an example application in slow technology, the system could help beginning meditators learn "walking meditation," a type of meditation that aims to take each pace as slowly as possible, to synchronize footsteps with breathing, and to land every footstep toes first. In the pilot study, we developed a walking-aware system and evaluated whether a multimedia-assisted mechanism is capable of enhancing beginners' walking awareness during walking meditation. Experimental results show that it could effectively assist beginners in slowing down their walking speed and decreasing incorrect footsteps. In the second experiment, we evaluated the Breathwalk-aware system to find a better feedback mechanism for learning the techniques of Breathwalk during walking meditation. The experimental results show that the visual-auditory mechanism is a better multimedia-assisted mechanism for walking meditation than either the visual or the auditory mechanism alone.

  10. The maturation state of the auditory nerve and brainstem in rats exposed to lead acetate and supplemented with ferrous sulfate.

    PubMed

    Zucki, Fernanda; Morata, Thais C; Duarte, Josilene L; Ferreira, Maria Cecília F; Salgado, Manoel H; Alvarenga, Kátia F

    The literature has reported an association between lead exposure and auditory effects, based on clinical and experimental studies. However, there is no consensus regarding the effects of lead on the auditory system, or their correlation with the concentration of the metal in the blood. To investigate the maturation state of the auditory system, specifically the auditory nerve and brainstem, in rats exposed to lead acetate and supplemented with ferrous sulfate. 30 weanling male rats (Rattus norvegicus, Wistar) were distributed into six groups of five animals each and exposed to one of two concentrations of lead acetate (100 or 400mg/L) and supplemented with ferrous sulfate (20mg/kg). The maturation state of the auditory nerve and brainstem was analyzed using Brainstem Auditory Evoked Potentials before and after lead exposure. The concentration of lead in blood and brainstem was analyzed using Inductively Coupled Plasma-Mass Spectrometry. The concentrations of lead in blood and in brainstem were highly correlated (r=0.951; p<0.0001). Both concentrations of lead acetate affected the maturation state of the auditory system, with slower maturation in the regions corresponding to the auditory nerve (wave I) and cochlear nuclei (wave II). Ferrous sulfate supplementation significantly reduced the concentration of lead in blood and brainstem for the group exposed to the lower concentration of lead (100mg/L), but not for the group exposed to the higher concentration (400mg/L). This study indicates that lead acetate can have deleterious effects on the maturation of the auditory nerve and brainstem (cochlear nucleus region), as detected by Brainstem Auditory Evoked Potentials, and that ferrous sulfate supplementation can partially mitigate this effect. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. All rights reserved.

  11. Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location.

    PubMed

    Smulders, Tom V; Jarvis, Erich D

    2013-11-01

    Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
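
    The sound-level cue described above follows, in the idealized free field, the inverse-square law: level falls by roughly 6 dB per doubling of distance (reverberation and nearby surfaces alter this in real rooms). A minimal sketch of that relationship:

    ```python
    import math

    def level_change_db(distance, ref_distance=1.0):
        """Free-field inverse-square law: change in sound pressure level
        relative to a reference distance, in dB (positive = farther/quieter)."""
        return 20 * math.log10(distance / ref_distance)

    print(round(level_change_db(2.0), 2))  # one doubling  -> 6.02
    print(round(level_change_db(4.0), 2))  # two doublings -> 12.04
    ```

    In reverberant rooms the direct-to-reverberant energy ratio, not overall level alone, carries much of the distance information, which is why the review treats level and reverberation as separate cues.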

  13. Neurotrophic factor intervention restores auditory function in deafened animals

    NASA Astrophysics Data System (ADS)

    Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.

    2002-02-01

    A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.

  14. Brainstem processing following unilateral and bilateral hearing-aid amplification.

    PubMed

    Dawes, Piers; Munro, Kevin J; Kalluri, Sridhar; Edwards, Brent

    2013-04-17

    Following previous research suggesting that hearing-aid experience may induce functional plasticity at the peripheral level of the auditory system, click-evoked auditory brainstem responses were recorded at first fitting and after 12 weeks of hearing-aid use in unilateral and bilateral hearing-aid users. A control group of experienced hearing-aid users was tested over a similar time scale. No significant alterations in auditory brainstem response latency or amplitude were identified in any group. This does not support the hypothesis of plastic changes in the peripheral auditory system induced by 12 weeks of hearing-aid use.

  15. Auditory and language development in Mandarin-speaking children after cochlear implantation.

    PubMed

    Lu, Xing; Qin, Zhaobing

    2018-04-01

    To evaluate early auditory performance, speech perception and language skills in Mandarin-speaking prelingual deaf children in the first two years after they received a cochlear implant (CI) and analyse the effects of possible associated factors. The Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS)/Meaningful Auditory Integration Scale (MAIS), Mandarin Early Speech Perception (MESP) test and Putonghua Communicative Development Inventory (PCDI) were used to assess auditory and language outcomes in 132 Mandarin-speaking children pre- and post-implantation. Children with CIs exhibited an ITMAIS/MAIS and PCDI developmental trajectory similar to that of children with normal hearing. The increased number of participants who achieved MESP categories 1-6 at each test interval showed a significant improvement in speech perception by paediatric CI recipients. Age at implantation and socioeconomic status were consistently associated with both auditory and language outcomes in the first two years post-implantation. Mandarin-speaking children with CIs exhibit significant improvements in early auditory and language development. Though these improvements followed the normative developmental trajectories, they still exhibited a gap compared with normative values. Earlier implantation and higher socioeconomic status are consistent predictors of greater auditory and language skills in the early stage. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Basic Auditory Processing Skills and Phonological Awareness in Low-IQ Readers and Typically Developing Controls

    ERIC Educational Resources Information Center

    Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha

    2011-01-01

    We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children, 55 typically developing children, and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…

  17. Adding sound to theory of mind: Comparing children's development of mental-state understanding in the auditory and visual realms.

    PubMed

    Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L

    2017-12-01

    Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Munesue, Toshio; Ono, Yasuki; Tsubokawa, Tsunehisa; Haruta, Yasuhiro; Oi, Manabu; Niida, Yo; Remijn, Gerard B; Takahashi, Tsutomu; Suzuki, Michio; Higashida, Haruhiro; Minabe, Yoshio

    2013-10-08

    Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, the simultaneous quantification of the bilateral auditory-evoked response during binaural hearing is difficult using conventional adult-sized MEG systems. Recently, a child-customised MEG device has facilitated the acquisition of bi-hemispheric recordings, even in young children. Using the child-customised MEG device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the auditory evoked magnetic field (AEF) in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere is correlated with language performance in autism spectrum disorder (ASD) and TD children. We investigated the P50m that is evoked by voice stimuli (/ne/) bilaterally in 33 young children (3 to 7 years old) with ASD and in 30 young children who were typically developing (TD). The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning subjects. The results showed that the children with ASD exhibited significantly less leftward lateralisation in their P50m intensity compared with the TD children. Furthermore, the results of a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age. 
The children with ASD did not show any correlation between P50m latency and language-related performance; instead, increasing chronological age was a significant predictor of shorter P50m latency in the right hemisphere. Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development.

  19. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing it with natural human sensing. The current implementation of the device translates signals from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. 
The human brain is superior to most existing computer systems at rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
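The column-scanning image-to-sound mapping described above can be sketched as follows. This is a minimal illustration in the spirit of the paper (image columns scanned over time, row height mapped to frequency, pixel brightness mapped to amplitude); the function name, frequency range, and normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=500.0, f_max=5000.0):
    """Map a 2D brightness image to an audio waveform.

    Columns are scanned left to right over `duration` seconds; each
    row drives a sine oscillator whose frequency rises with row
    height and whose amplitude follows pixel brightness.
    """
    n_rows, n_cols = image.shape
    # Normalize brightness to [0, 1] (a stand-in for the paper's
    # histogram normalization step).
    img = (image - image.min()) / (np.ptp(image) + 1e-12)
    freqs = np.linspace(f_min, f_max, n_rows)   # one frequency per row
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    out = []
    for col in range(n_cols):
        # Sum of sinusoids weighted by this column's brightness profile;
        # flip so the bottom image row gets the lowest frequency.
        column = img[::-1, col]
        chunk = (column[:, None]
                 * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out.append(chunk)
    signal = np.concatenate(out)
    # Normalize to [-1, 1] for playback.
    return signal / (np.abs(signal).max() + 1e-12)
```

A bright pixel thus becomes a tone burst whose pitch encodes its height and whose timing encodes its horizontal position, which is the essence of distributing an image in time.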

  20. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia

    PubMed Central

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B.; Loewy, Rachel; Vinogradov, Sophia

    2016-01-01

    BACKGROUND Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. METHODS 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed-model repeated-measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. RESULTS We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed inter-individual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20–40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. CONCLUSIONS There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of inter-individual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. PMID:27617637
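The initial-change-then-plateau pattern reported above can be illustrated with a far simpler model than the authors' latent growth curves: a piecewise-linear least-squares fit in which APS improves up to a fixed breakpoint (here 20 hours of training) and is flat afterwards. This is a hedged sketch; the breakpoint, function name, and data are hypothetical, not the study's actual analysis.

```python
import numpy as np

def fit_initial_change_and_plateau(hours, aps, breakpoint=20.0):
    """Fit APS(t) = intercept + slope * min(t, breakpoint).

    Captures the reported trajectory shape: improvement up to
    ~20 h of training, then a flat plateau. Returns
    (intercept, slope, plateau_value).
    """
    hours = np.asarray(hours, dtype=float)
    aps = np.asarray(aps, dtype=float)
    # Design matrix: a constant column and the "hinged" time column.
    X = np.column_stack([np.ones_like(hours), np.minimum(hours, breakpoint)])
    (intercept, slope), *_ = np.linalg.lstsq(X, aps, rcond=None)
    plateau = intercept + slope * breakpoint
    return intercept, slope, plateau
```

Fitting this per subject yields two interpretable numbers per participant — the steepness of the initial change (slope) and the plateau sustained thereafter — which is the kind of subject-level summary the mediation analysis above relies on.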

  1. Comparing the effect of auditory-only and auditory-visual modes in two groups of Persian children using cochlear implants: a randomized clinical trial.

    PubMed

    Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam

    2013-09-01

    Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. 
Therefore, when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language at home and at school, both before and after implantation, it is not essential to limit access to the visual modality and rely solely on the auditory modality. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Auditory training improves auditory performance in cochlear implanted children.

    PubMed

    Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel

    2016-07-01

    While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity, and finding strategies to minimize it, is of utmost importance. Our aim here is to test the effects of an auditory training strategy, "Sound in Hands", which uses playful tasks grounded in the theoretical and empirical findings of cognitive science. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions for deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear-implanted children could improve performance in trained auditory tasks and whether learning would transfer to a phonetic discrimination test. Nineteen prelingually deaf children (4-10 years old) with unilateral cochlear implants and no additional handicap were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the experimental group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min; the untrained group served as the control group (CG). Measures were taken for both groups before training (T1) and after training (T2). The EG showed significant improvement in the identification, discrimination, and auditory memory tasks; improvement in the ASA task did not reach significance. The CG did not show significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only. 
Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits of CI children and for gaining a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Evaluation of auditory perception development in neonates by event-related potential technique.

    PubMed

    Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan

    2017-08-01

    To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development, and sex, using the event-related potential (ERP) technique. Sixty full-term neonates, 32 males and 28 females, aged 2-28 days were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age, and to compare left and right hemispheres, and males and females. Average ERP waveforms in neonates progressed from relatively irregular, flat-bottomed troughs to relatively regular, steep-sided ripples. A good linear relationship between ERPs and days after birth was observed: as days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P<0.01). N2 areas in the central part of the brain were significantly greater, and N2 latencies significantly shorter, in the left hemisphere compared with the right, indicative of left-hemisphere dominance (both P<0.05). N2 areas were greater and N2 latencies shorter in female neonates compared with males. The neonatal period is one of rapid auditory perception development. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
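The auditory oddball paradigm used above interleaves a frequent "standard" tone with a rare "deviant" to elicit ERPs. A minimal sketch of generating such a stimulus sequence might look like this; the deviant probability and the no-back-to-back-deviants constraint are common conventions in oddball designs, assumed here for illustration rather than taken from this study's protocol.

```python
import random

def oddball_sequence(n_trials=200, deviant_prob=0.2, seed=0):
    """Build a pseudo-random auditory oddball trial sequence.

    'S' = standard tone, 'D' = deviant tone. Deviants never occur
    back-to-back, a typical constraint ensuring each deviant is
    preceded by at least one standard.
    """
    rng = random.Random(seed)  # seeded for a reproducible sequence
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == 'D':
            seq.append('S')            # forbid consecutive deviants
        elif rng.random() < deviant_prob:
            seq.append('D')
        else:
            seq.append('S')
    return seq
```

Averaging the EEG epochs time-locked to the 'D' trials versus the 'S' trials is what yields the N2 component measured in the study.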

  4. Nonlinear Processing of Auditory Brainstem Response

    DTIC Science & Technology

    2001-10-25

    Kraków, Poland. Abstract: Auditory brainstem response potentials (ABR) are signals calculated from the EEG signals registered as responses to an...acoustic activation of the auditory system. The ABR signals provide an objective diagnostic method, widely applied in examinations of hearing organs

  5. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    ERIC Educational Resources Information Center

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  6. Corticofugal modulation of peripheral auditory responses

    PubMed Central

    Terreros, Gonzalo; Delano, Paul H.

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647

  7. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that, in bimodal environments, vision often has an advantage over the other senses in humans. Accordingly, better memory performance is assumed for visual than for, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting: visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  9. Inhibition of Repulsive Guidance Molecule, RGMa, Increases Afferent Synapse Formation with Auditory Hair Cells

    PubMed Central

    Brugeaud, Aurore; Tong, Mingjie; Luo, Li; Edge, Albert S.B.

    2017-01-01

    The peripheral fibers that extend from auditory neurons to hair cells are sensitive to damage, and replacement of the fibers and their afferent synapse with hair cells would be of therapeutic interest. Here, we show that RGMa, a repulsive guidance molecule previously shown to play a role in the development of the chick visual system, is expressed in the developing, newborn, and mature mouse inner ear. The effect of RGMa on synaptogenesis between afferent neurons and hair cells, from which afferent connections had been removed, was assessed. Contact of neural processes with hair cells and elaboration of postsynaptic densities at sites of the ribbon synapse were increased by treatment with a blocking antibody to RGMa, and pruning of auditory fibers to achieve the mature branching pattern of afferent neurons was accelerated. Inhibition by RGMa could thus explain why auditory neurons have a low capacity to regenerate peripheral processes: postnatal spiral ganglion neurons retain the capacity to send out processes that respond to signals for synapse formation, but expression of RGMa postnatally appears to be detrimental to regeneration of afferent hair cell innervation and antagonizes synaptogenesis. Increased synaptogenesis after inhibition of RGMa suggests that manipulation of guidance or inhibitory factors may provide a route to increase formation of new synapses at deafferented hair cells. PMID:24123853

  10. Home-based Early Intervention on Auditory and Speech Development in Mandarin-speaking Deaf Infants and Toddlers with Chronological Ages of 7-24 Months.

    PubMed

    Yang, Ying; Liu, Yue-Hui; Fu, Ming-Fu; Li, Chun-Lin; Wang, Li-Yan; Wang, Qi; Sun, Xi-Bin

    2015-08-20

    Data on early auditory and speech development during home-based early intervention in infants and toddlers with hearing loss younger than 2 years are still sparse in China. This study aimed to observe the development of auditory and speech skills in deaf infants and toddlers who were fitted with hearing aids and/or received cochlear implantation between the chronological ages of 7-24 months, and to analyze the effect of chronological age and recovery time on auditory and speech development in the course of home-based early intervention. This longitudinal study included 55 hearing-impaired children with severe and profound binaural deafness, who were divided into Group A (7-12 months), Group B (13-18 months), and Group C (19-24 months) based on chronological age. The Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scales were used to evaluate auditory and speech development at baseline and at 3, 6, 9, 12, 18, and 24 months of habilitation. Descriptive statistics were used to describe demographic features, and data were analyzed by repeated-measures analysis of variance. With 24 months of hearing intervention, 78% of the patients were able to understand common phrases and conversation without lip-reading, and 96% were intelligible to a listener. All three groups showed rapid growth in each period of habilitation. CAP and SIR scores developed rapidly within 24 months after fitting of the auxiliary device in Group A, which showed much better auditory and speech abilities than Group B (P < 0.05) and Group C (P < 0.05). Group B achieved better results than Group C, although the differences between Group B and Group C were not significant (P > 0.05). The data suggest that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and recovery time may be major factors for aural verbal outcomes in hearing-impaired children. 
The first year of habilitation after fitting of the auxiliary device may be relatively crucial for the development of auditory and speech skills in hearing-impaired children.

  11. Impact of Auditory Selective Attention on Verbal Short-Term Memory and Vocabulary Development

    ERIC Educational Resources Information Center

    Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial

    2009-01-01

    This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing…

  12. Developmental Trends in Auditory Processing Can Provide Early Predictions of Language Acquisition in Young Infants

    ERIC Educational Resources Information Center

    Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R.; Shao, Jie; Lozoff, Betsy

    2013-01-01

    Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with…

  13. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.

    PubMed

    Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain-computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect of background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  14. Theoretical Tinnitus Framework: A Neurofunctional Model.

    PubMed

    Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G

    2016-01-01

    Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception: unattended, attended, and attended awareness. Current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating its cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as in developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, the brainstem, the basal ganglia, the striatum, and the auditory and prefrontal cortices. Functionally, the model assumes the presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise-canceling mechanisms in the midbrain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or as an indication of a disease, in which case cortical top-down processes weaken the noise-canceling effects. This results in an increase in negative cognitive and emotional reactions such as depression and anxiety. The negative or positive cognitive-emotional feedback within the top-down approach may have no relation to the patient's previous experience. 
It can also be associated with aversive stimuli, similar to the abnormal neural activity generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a cognitive-emotional negative appraisal of stimuli, as in people with hypochondria. We acknowledge that the proposed Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain-lesion, and behavioral techniques.

  16. The Development of Auditory Perception in Children Following Auditory Brainstem Implantation

    PubMed Central

    Colletti, Liliana; Shannon, Robert V.; Colletti, Vittorio

    2014-01-01

    Auditory brainstem implants (ABI) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed a consecutive group of 64 deaf children for up to 12 years following ABI implantation. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral post-meningitic cochlear ossification in 3, NF2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital non-auditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and six with the MED-EL ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) achieved the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open-set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without non-auditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI implantation reveal significant auditory benefit in most children, and open-set auditory recognition in many. PMID:25377987

  17. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    ERIC Educational Resources Information Center

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  18. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  19. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, the Parents' Evaluation of Aural/Oral Performance of Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  20. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
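    The two-channel idea in this abstract can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the authors' model: each channel is taken to be a half-wave rectified derivative of the sound envelope, so a silent gap in ongoing noise produces one offset event at its start and one onset event at its end, and weakening the offset channel selectively removes the cue marking where the gap begins.

```python
def channel_responses(envelope):
    """Toy onset/offset channels as half-wave rectified envelope derivatives.

    The onset channel responds to envelope increases, the offset channel to
    decreases. An illustrative caricature of the two-channel hypothesis only.
    """
    diffs = [b - a for a, b in zip(envelope, envelope[1:])]
    onset = [max(d, 0.0) for d in diffs]
    offset = [max(-d, 0.0) for d in diffs]
    return onset, offset

# Noise envelope with a brief silent gap in the middle (invented values):
env = [1.0] * 5 + [0.0] * 2 + [1.0] * 5
onset, offset = channel_responses(env)
# The gap start is signaled only by the offset channel, the gap end only by
# the onset channel; an offset-channel deficit leaves the gap unmarked.
print(sum(onset), sum(offset))  # prints: 1.0 1.0
```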

  1. Sensitivity and specificity of auditory steady‐state response testing

    PubMed Central

    Rabelo, Camila Maia; Schochat, Eliane

    2011-01-01

    INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, relative to the normal group, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greater in the mesial temporal sclerosis group than in the central auditory processing disorder group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds are significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442

  2. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, reflecting enhanced perceptual skills for the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment tested a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects of auditory linguistic experience as well. Overall, the results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  3. A Description of a Prototype System at NTID which Merges Computer Assisted Instruction and Instructional Television.

    ERIC Educational Resources Information Center

    vonFeldt, James R.

    The development of a prototype system is described which merges the strengths of computer assisted instruction (data gathering, interactive learning, and individualized instruction) with the motion, color, and audio features of television. Creation of the prototype system will allow testing of both TV and interactive CAI/TV strategies in auditory and…

  4. Skills for Academic Improvement: A Guide for How-to-Study Counselors.

    DTIC Science & Technology

    1982-06-01

    auditory neurological system beyond the ear. Auditory perception consists of essentially eight components, which are: 1. Auditory attention ..."daydreaming" or difficulty following lectures in different classes may be an indication of problems with auditory attention. 2. Sound localization...says. The counselor must listen not only attentively to what the cadet says, but must learn to listen perceptively for what the cadet really means. The

  5. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and...fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side of

  6. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  7. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    PubMed

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Underlying Mechanisms of Tinnitus: Review and Clinical Implications

    PubMed Central

    Henry, James A.; Roberts, Larry E.; Caspary, Donald M.; Theodoroff, Sarah M.; Salvi, Richard J.

    2016-01-01

    Background: The study of tinnitus mechanisms has increased tenfold in the last decade. The common denominator for all of these studies is the goal of elucidating the underlying neural mechanisms of tinnitus with the ultimate purpose of finding a cure. While these basic science findings may not be immediately applicable to the clinician who works directly with patients to assist them in managing their reactions to tinnitus, a clear understanding of these findings is needed to develop the most effective procedures for alleviating tinnitus. Purpose: The goal of this review is to provide audiologists and other health-care professionals with a basic understanding of the neurophysiological changes in the auditory system likely to be responsible for tinnitus. Results: It is increasingly clear that tinnitus is a pathology involving neuroplastic changes in central auditory structures that take place when the brain is deprived of its normal input by pathology in the cochlea. Cochlear pathology is not always expressed in the audiogram but may be detected by more sensitive measures. Neural changes can occur at the level of synapses between inner hair cells and the auditory nerve and within multiple levels of the central auditory pathway. Long-term maintenance of tinnitus is likely a function of a complex network of structures involving central auditory and nonauditory systems. Conclusions: Patients often have expectations that a treatment exists to cure their tinnitus. They should be made aware that research is increasing to discover such a cure and that their reactions to tinnitus can be mitigated through the use of evidence-based behavioral interventions. PMID:24622858

  10. A Design Architecture for an Integrated Training System Decision Support System

    DTIC Science & Technology

    1990-07-01

    Sensory modes include visual, auditory, tactile, or kinesthetic; performance categories include time to complete, speed of response, or correct action ...procedures, and finally application and examples from the aviation proponency with emphasis on the LHX program. Appendix B is a complete bibliography...integrated analysis of ITS development. The approach was designed to provide an accurate and complete representation of the ITS development process and

  11. Speech comprehension training and auditory and cognitive processing in older adults.

    PubMed

    Pichora-Fuller, M Kathleen; Levitt, Harry

    2012-12-01

    To provide a brief history of speech comprehension training systems and an overview of research on auditory and cognitive aging as background to recommendations for future directions for rehabilitation. Two distinct domains were reviewed: one concerning technological and the other concerning psychological aspects of training. Historical trends and advances in these 2 domains were interrelated to highlight converging trends and directions for future practice. Over the last century, technological advances have influenced both the design of hearing aids and training systems. Initially, training focused on children and those with severe loss for whom amplification was insufficient. Now the focus has shifted to older adults with relatively little loss but difficulties listening in noise. Evidence of brain plasticity from auditory and cognitive neuroscience provides new insights into how to facilitate perceptual (re-)learning by older adults. There is a new imperative to complement training to increase bottom-up processing of the signal with more ecologically valid training to boost top-down information processing based on knowledge of language and the world. Advances in digital technologies enable the development of increasingly sophisticated training systems incorporating complex meaningful materials such as music, audiovisual interactive displays, and conversation.

  12. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  13. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study.

    PubMed

    van Zuijen, Titia L; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-10-18

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or without familial risk for dyslexia to investigate whether their auditory system was able to detect a temporal change in a tone pattern. The children were followed longitudinally and completed an intelligence test and a language-development test at ages 4 and 4.5 years. Literacy-related skills were measured at the beginning of second grade, and word- and pseudo-word reading fluency were measured at the end of second grade. The EEG responses showed that control children could detect the temporal change, as indicated by a mismatch response (MMR). The MMR was not observed in at-risk children. Furthermore, the fronto-central MMR amplitude correlated with preliterate language comprehension and with later word reading fluency, but not with phonological awareness. We conclude that temporal auditory processing differentiates young children at risk for dyslexia from controls and is a precursor of preliterate language comprehension and reading fluency. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia.

    PubMed

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia

    2016-11-01

    Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed-model repeated-measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Blast exposure and dual sensory impairment: an evidence review and integrated rehabilitation approach.

    PubMed

    Saunders, Gabrielle H; Echt, Katharina V

    2012-01-01

    Combat exposures to blast can result in both peripheral damage to the ears and eyes and central damage to the auditory and visual processing areas in the brain. The functional effects of the latter include visual, auditory, and cognitive processing difficulties that manifest as deficits in attention, memory, and problem solving--symptoms similar to those seen in individuals with visual and auditory processing disorders. Coexisting damage to the auditory and visual systems is referred to as dual sensory impairment (DSI). The number of Operation Iraqi Freedom/Operation Enduring Freedom Veterans with DSI is vast; yet currently no established models or guidelines exist for assessment, rehabilitation, or service-delivery practice. In this article, we review the current state of knowledge regarding blast exposure and DSI and outline the many unknowns in this area. Further, we propose a model for clinical assessment and rehabilitation of blast-related DSI that includes development of a coordinated team-based approach to target activity limitations and participation restrictions in order to enhance reintegration, recovery, and quality of life.

  16. Cortical contributions to the auditory frequency-following response revealed by MEG

    PubMed Central

    Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.

    2016-01-01

    The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409

  17. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
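    The lag measurements this abstract describes lend themselves to a simple summary analysis. The sketch below is a hedged illustration, not the authors' code: given per-presentation visual and auditory onset timestamps (the kind of values an external hardware timer such as the Black Box Toolkit records), it summarizes the distribution of audio-visual lags for one browser/computer combination. All timestamp values are invented for illustration.

```python
from statistics import mean, stdev

def lag_summary(visual_onsets_ms, audio_onsets_ms):
    """Summarize audio-visual onset lags (audio minus visual), in milliseconds."""
    lags = [a - v for v, a in zip(visual_onsets_ms, audio_onsets_ms)]
    return {
        "mean_ms": mean(lags),  # average lag of sound behind image
        "sd_ms": stdev(lags),   # trial-to-trial variability of the lag
        "min_ms": min(lags),
        "max_ms": max(lags),
    }

# Hypothetical onsets for four presentations on one browser/computer pair:
visual = [0.0, 1000.0, 2000.0, 3000.0]
audio = [35.0, 1042.0, 2031.0, 3048.0]
print(lag_summary(visual, audio)["mean_ms"])  # prints: 39.0
```

    A per-condition table of these summaries (browser x computer x coding option) is one way to reproduce the kind of comparison the study reports.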

  18. The activity of spontaneous action potentials in developing hair cells is regulated by Ca(2+)-dependence of a transient K+ current.

    PubMed

    Levic, Snezana; Lv, Ping; Yamoah, Ebenezer N

    2011-01-01

    Spontaneous action potentials have been described in developing sensory systems. These rhythmic activities may have instructional roles for the functional development of synaptic connections. The importance of spontaneous action potentials in the developing auditory system is underpinned by the stark correlation between the time of auditory system functional maturity and the cessation of spontaneous action potentials. A prominent K(+) current that regulates patterning of action potentials is I(A). This current undergoes marked changes in expression during chicken hair cell development. Although the properties of I(A) are not normally classified as Ca(2+)-dependent, we demonstrate that throughout the development of chicken hair cells, I(A) is greatly reduced by acute alterations of intracellular Ca(2+). As determinants of spike timing and firing frequency, intracellular Ca(2+) buffers shift the activation and inactivation properties of the current to more positive potentials. Our findings provide evidence to demonstrate that the kinetics and functional expression of I(A) are tightly regulated by intracellular Ca(2+). Such a feedback mechanism between the functional expression of I(A) and intracellular Ca(2+) may shape the activity of spontaneous action potentials, thus potentially sculpting synaptic connections in an activity-dependent manner in the developing cochlea. © 2011 Levic et al.

  19. Estradiol-dependent Modulation of Serotonergic Markers in Auditory Areas of a Seasonally Breeding Songbird

    PubMed Central

    Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.

    2011-01-01

    Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431

  20. Towards a neural basis of music perception.

    PubMed

    Koelsch, Stefan; Siebel, Walter A

    2005-12-01

    Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.

  1. The dispersion-focalization theory of sound systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Jean-Luc; Abry, Christian; Boë, Louis-Jean; Vallée, Nathalie; Ménard, Lucie

    2005-04-01

    The Dispersion-Focalization Theory states that sound systems in human languages are shaped by two major perceptual constraints: dispersion, driving auditory contrast towards maximal or sufficient values [B. Lindblom, J. Phonetics 18, 135-152 (1990)], and focalization, driving auditory spectra towards patterns with close neighboring formants. Dispersion is computed from the sum of the inverse squared inter-spectral distances in the (F1, F2, F3, F4) space, using a non-linear process based on the 3.5 Bark critical distance to estimate F2'. Focalization is based on the idea that close neighboring formants produce vowel spectra with marked peaks, which are easier to process and memorize in the auditory system. Evidence for the increased stability of focal vowels in short-term memory was provided by a discrimination experiment on adult French subjects [J. L. Schwartz and P. Escudier, Speech Comm. 8, 235-259 (1989)]. A reanalysis of infant discrimination data shows that focalization could well be responsible for recurrent discrimination asymmetries [J. L. Schwartz et al., Speech Comm. (in press)]. Recent data on children's vowel production indicate that focalization seems to be part of the perceptual templates driving speech development. The Dispersion-Focalization Theory produces valid predictions for both vowel and consonant systems, consistent with available databases of human language inventories.
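
    The dispersion term described above lends itself to a compact illustration. The following sketch computes a dispersion energy as the sum of inverse squared inter-spectral distances over all vowel pairs; the formant coordinates are invented for illustration, and the non-linear F2' estimation via the 3.5 Bark critical distance is omitted.

```python
from itertools import combinations

def dispersion_energy(vowels):
    """Dispersion term as described above: the sum, over all vowel
    pairs, of the inverse squared distance between their spectral
    representations (points in a formant-based perceptual space).
    Lower energy = more dispersed, i.e. better auditory contrast."""
    energy = 0.0
    for a, b in combinations(vowels, 2):
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        energy += 1.0 / d2
    return energy

# Hypothetical (F1, F2') coordinates in Bark, for illustration only.
spread = [(3.0, 12.0), (7.0, 7.0), (8.0, 13.5)]    # /i a u/-like system
crowded = [(3.0, 12.0), (3.5, 12.5), (4.0, 13.0)]  # crowded front vowels

# A spread-out 3-vowel system has lower energy than a crowded one.
assert dispersion_energy(spread) < dispersion_energy(crowded)
```

A theory-driven optimizer would then search vowel inventories minimizing this energy (plus the focalization term, not sketched here).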

  2. Testing Convergent Evolution in Auditory Processing Genes between Echolocating Mammals and the Aye-Aye, a Percussive-Foraging Primate

    PubMed Central

    Jerjos, Michael; Hohman, Baily; Lauterbur, M. Elise; Kistler, Logan

    2017-01-01

    Abstract Several taxonomically distinct mammalian groups—certain microbats and cetaceans (e.g., dolphins)—share both morphological adaptations related to echolocation behavior and strong signatures of convergent evolution at the amino acid level across seven genes related to auditory processing. Aye-ayes (Daubentonia madagascariensis) are nocturnal lemurs with a specialized auditory processing system. Aye-ayes tap rapidly along the surfaces of trees, listening to reverberations to identify the mines of wood-boring insect larvae; this behavior has been hypothesized to functionally mimic echolocation. Here we investigated whether there are signals of convergence in auditory processing genes between aye-ayes and known mammalian echolocators. We developed a computational pipeline (Basic Exon Assembly Tool) that produces consensus sequences for regions of interest from shotgun genomic sequencing data for nonmodel organisms without requiring de novo genome assembly. We reconstructed complete coding region sequences for the seven convergent echolocating bat–dolphin genes for aye-ayes and another lemur. We compared sequences from these two lemurs in a phylogenetic framework with those of bat and dolphin echolocators and appropriate nonecholocating outgroups. Our analysis reaffirms the existence of amino acid convergence at these loci among echolocating bats and dolphins; some methods also detected signals of convergence between echolocating bats and both mice and elephants. However, we observed no significant signal of amino acid convergence between aye-ayes and echolocating bats and dolphins, suggesting that aye-aye tap-foraging auditory adaptations represent distinct evolutionary innovations. These results are also consistent with a developing consensus that convergent behavioral ecology does not reliably predict convergent molecular evolution. PMID:28810710

  3. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy

    PubMed Central

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Objective: Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual’s capacity to drive safely. Methods: The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Results: Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion: We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication, and to promote a sense of social responsibility in drivers who are on medication by providing these individuals with a means of testing their own capacity to drive safely. PMID:25709406

  4. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy.

    PubMed

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual's capacity to drive safely. The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication, and to promote a sense of social responsibility in drivers who are on medication by providing these individuals with a means of testing their own capacity to drive safely.
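
    The decile-based cut-off described in these records can be sketched as follows. The reference sample, thresholds, and the direction of the cut (flagging the slowest 30%) are assumptions for illustration, not the Safedrive implementation.

```python
import statistics

# Hypothetical reference sample of reaction times (ms); the real test
# used 300 Italian subjects, not these invented values.
reference_ms = [210, 225, 240, 250, 260, 270, 285, 300, 320, 350,
                230, 245, 255, 265, 275, 290, 310, 330, 360, 400]

deciles = statistics.quantiles(reference_ms, n=10)  # 9 cut points
slow_cutoff = deciles[6]  # 70th percentile: slowest 3 deciles lie above

def incompatible_with_driving(rt_ms):
    """Flag a measured reaction time falling in the slowest three
    deciles of the reference sample (assumed cut-off direction)."""
    return rt_ms > slow_cutoff

assert incompatible_with_driving(380)
assert not incompatible_with_driving(240)
```

In practice each of the four reaction-time measures would get its own decile table, stratified by the age and sex effects the study reports.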

  5. Brain-Derived Neurotrophic Factor (BDNF) Promotes Cochlear Spiral Ganglion Cell Survival and Function in Deafened, Developing Cats

    PubMed Central

    Leake, Patricia A.; Hradek, Gary T.; Hetherington, Alexander M.; Stakhovskaya, Olga

    2011-01-01

    Postnatal development and survival of spiral ganglion (SG) neurons depend upon both neural activity and neurotrophic support. Our previous studies showed that electrical stimulation from a cochlear implant only partly prevents SG degeneration after early deafness. Thus, neurotrophic agents that might be combined with an implant to improve neural survival are of interest. Recent studies reporting that BDNF promotes SG survival after deafness have been conducted in rodents and limited to relatively short durations. Our study examined longer-duration BDNF treatment in deafened cats, which may better model the slow progression of SG degeneration in human cochleae, and provides the first study of BDNF in the developing auditory system. Kittens were deafened neonatally, implanted at 4-5 weeks with intracochlear electrodes containing a drug-delivery cannula, and BDNF or artificial perilymph was infused for 10 weeks from a mini-osmotic pump. In BDNF-treated cochleae, SG cells grew to normal size and were significantly larger than cells on the contralateral side. However, their morphology was not completely normal, and many neurons lacked or had thinned perikaryal myelin. Unbiased stereology was employed to estimate SG cell density, independent of cell size. BDNF was effective in promoting significantly improved survival of SG neurons in these developing animals. BDNF treatment also resulted in higher density and larger size of myelinated radial nerve fibers, sprouting of fibers into the scala tympani, and improvement in electrically evoked auditory brainstem response thresholds. Although BDNF may have potential therapeutic value in the developing auditory system, many serious obstacles currently preclude clinical application. PMID:21452221

  6. Auditory processing disorders, verbal disfluency, and learning difficulties: a case study.

    PubMed

    Jutras, Benoît; Lagacé, Josée; Lavigne, Annik; Boissonneault, Andrée; Lavoie, Charlen

    2007-01-01

    This case study reports the findings of auditory behavioral and electrophysiological measures performed on a graduate student (identified as LN) presenting with verbal disfluency and learning difficulties. Results of behavioral audiological testing documented the presence of auditory processing disorders, particularly in temporal processing and binaural integration. Electrophysiological test results, including middle latency, late latency and cognitive potentials, revealed that LN's central auditory system processes acoustic stimuli differently from that of a reference group with normal hearing.

  7. Neuronal connectivity and interactions between the auditory and limbic systems. Effects of noise and tinnitus.

    PubMed

    Kraus, Kari Suzanne; Canlon, Barbara

    2012-06-01

    Acoustic experience such as sound, noise, or the absence of sound induces structural or functional changes in the central auditory system, but can also affect limbic regions such as the amygdala and hippocampus. The amygdala is particularly sensitive to sound with valence or meaning, such as vocalizations, crying or music. The amygdala plays a central role in auditory fear conditioning and regulation of the acoustic startle response, and can modulate auditory cortex plasticity. A stressful acoustic stimulus, such as noise, causes amygdala-mediated release of stress hormones via the HPA axis, which may have negative effects on health, as well as on the central nervous system. Conversely, short-term exposure to stress hormones elicits positive effects such as hearing protection. The hippocampus can affect auditory processing by adding a temporal dimension, as well as by mediating novelty detection via theta wave phase-locking. Noise exposure affects hippocampal neurogenesis and LTP in a manner that affects structural plasticity, learning and memory. Tinnitus, typically induced by hearing malfunctions, is associated with emotional stress, depression and anatomical changes of the hippocampus. In turn, the limbic system may play a role in the generation as well as the suppression of tinnitus, indicating that the limbic system may be essential for tinnitus treatment. A further understanding of auditory-limbic interactions will contribute to future treatment strategies for tinnitus and noise trauma. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Injury- and Use-Related Plasticity in the Adult Auditory System.

    ERIC Educational Resources Information Center

    Irvine, Dexter R. F.

    2000-01-01

    This article discusses findings concerning the plasticity of auditory cortical processing mechanisms in adults, including the effects of restricted cochlear damage or behavioral training with acoustic stimuli on the frequency selectivity of auditory cortical neurons and evidence for analogous injury- and use-related plasticity in the adult human…

  9. Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms.

    PubMed

    Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C

    2013-11-01

    Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies, which suggest that adopting more ethological paradigms utilizing natural communication contexts are scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  10. High-throughput behavioral screening method for detecting auditory response defects in zebrafish.

    PubMed

    Bang, Pascal I; Yelick, Pamela C; Malicki, Jarema J; Sewell, William F

    2002-08-30

    We have developed an automated, high-throughput behavioral screening method for detecting hearing defects in zebrafish. Our assay monitors a rapid escape reflex in response to a loud sound. With this approach, 36 adult zebrafish, restrained in visually isolated compartments, can be simultaneously assessed for responsiveness to near-field 400 Hz sinusoidal tone bursts. Automated, objective determinations of responses are achieved with a computer program that obtains images at precise times relative to the acoustic stimulus. Images taken with a CCD video camera before and after stimulus presentation are subtracted to reveal a response to the sound. Up to 108 fish can be screened per hour. Over 6500 fish were tested to validate the reliability of the assay. We found that 1% of these animals displayed hearing deficits. The phenotypes of non-responders were further assessed with radiological analysis for defects in the gross morphology of the auditory system. Nearly all of those showed abnormalities in conductive elements of the auditory system: the swim bladder or Weberian ossicles. Copyright 2002 Elsevier Science B.V.
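
    The image-subtraction step of the screening assay can be illustrated with a minimal sketch: subtract a pre-stimulus frame from a post-stimulus frame and call the well a responder if enough pixels changed, i.e. the fish moved. The frame sizes and thresholds below are hypothetical; this is not the authors' program.

```python
def detect_response(pre, post, pixel_thresh=30, count_thresh=5):
    """pre/post: 2-D grayscale frames (lists of rows of 0-255 ints)
    taken before and after the acoustic stimulus. Returns True when
    the frame difference exceeds a pixel-count threshold."""
    changed = sum(
        1
        for row_a, row_b in zip(pre, post)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > pixel_thresh
    )
    return changed >= count_thresh

still = [[10] * 8 for _ in range(8)]        # fish did not move
moved = [row[:] for row in still]
for r in range(2, 5):                       # simulate a displaced silhouette
    for c in range(2, 5):
        moved[r][c] = 200

assert detect_response(still, moved)        # escape reflex detected
assert not detect_response(still, still)    # no movement, no response
```

In the real assay this comparison would run once per compartment per stimulus, timed precisely against the 400 Hz tone burst.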

  11. Transfer characteristics of the hair cell's afferent synapse

    NASA Astrophysics Data System (ADS)

    Keen, Erica C.; Hudspeth, A. J.

    2006-04-01

    The sense of hearing depends on fast, finely graded neurotransmission at the ribbon synapses connecting hair cells to afferent nerve fibers. The processing that occurs at this first chemical synapse in the auditory pathway determines the quality and extent of the information conveyed to the central nervous system. Knowledge of the synapse's input-output function is therefore essential for understanding how auditory stimuli are encoded. To investigate the transfer function at the hair cell's synapse, we developed a preparation of the bullfrog's amphibian papilla. In the portion of this receptor organ representing stimuli of 400-800 Hz, each afferent nerve fiber forms several synaptic terminals onto one to three hair cells. By performing simultaneous voltage-clamp recordings from presynaptic hair cells and postsynaptic afferent fibers, we established that the rate of evoked vesicle release, as determined from the average postsynaptic current, depends linearly on the amplitude of the presynaptic Ca2+ current. This result implies that, for receptor potentials in the physiological range, the hair cell's synapse transmits information with high fidelity. Keywords: auditory system | exocytosis | glutamate | ribbon synapse | synaptic vesicle
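
    The reported linear input-output relation means a single gain parameter summarizes the synapse's transfer function over the physiological range. As a toy illustration (with invented numbers, not the paper's data), an ordinary least-squares fit recovers that gain exactly when the relation is linear:

```python
def least_squares(xs, ys):
    """Ordinary least-squares slope and intercept for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical paired measurements: presynaptic Ca2+ current (pA)
# versus evoked release rate (vesicles/s), constructed to be linear.
i_ca = [10, 20, 30, 40, 50]
rate = [25, 45, 65, 85, 105]

gain, offset = least_squares(i_ca, rate)
assert abs(gain - 2.0) < 1e-9 and abs(offset - 5.0) < 1e-9
```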

  12. Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants.

    PubMed

    Moore, Brian C J

    2003-03-01

    To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
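
    The first issue the review raises, compensating for the loss of cochlear compression, is commonly addressed by applying a compressive map to each channel envelope in the implant processor. The sketch below is illustrative only: the power-law exponent and input range are assumptions, not any specific device's amplitude-mapping function.

```python
def compress_envelope(env, threshold=1e-4, saturation=1.0, exponent=0.25):
    """Map an acoustic envelope amplitude to a 0..1 electrical drive
    via a power-law compressive nonlinearity (illustrative parameters)."""
    env = min(max(env, threshold), saturation)        # clamp to input range
    span = saturation ** exponent - threshold ** exponent
    return (env ** exponent - threshold ** exponent) / span

assert compress_envelope(0.0) == 0.0
assert compress_envelope(1.0) == 1.0
# Compression: a 100x step in input (0.001 -> 0.1) produces far less
# than a 100x step in output.
assert compress_envelope(0.1) / compress_envelope(0.001) < 10
```

A real processor applies such a map per channel, after band-pass filtering and envelope extraction, and the shape of the map strongly affects loudness growth for the implantee.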

  13. Central Auditory Maturation and Behavioral Outcome in Children with Auditory Neuropathy Spectrum Disorder who Use Cochlear Implants

    PubMed Central

    Cardon, Garrett; Sharma, Anu

    2013-01-01

    Objective: We examined cortical auditory development and behavioral outcomes in children with ANSD fitted with cochlear implants (CI). Design: Cortical maturation, measured by P1 cortical auditory evoked potential (CAEP) latency, was regressed against scores on the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS). Implantation age was also considered in relation to CAEP findings. Study sample: Cross-sectional and longitudinal samples of 24 and 11 children, respectively, with ANSD fitted with CIs. Results: P1 CAEP responses were present in all children after implantation, though previous findings suggest that only 50-75% of ANSD children with hearing aids show CAEP responses. P1 CAEP latency was significantly correlated with participants' IT-MAIS scores. Furthermore, more children implanted before age two years showed normal P1 latencies, while those implanted later mainly showed delayed latencies. Longitudinal analysis revealed that most children showed normal or improved cortical maturation after implantation. Conclusion: Cochlear implantation resulted in measurable cortical auditory development for all children with ANSD. Children fitted with CIs under age two years were more likely to show age-appropriate CAEP responses within 6 months after implantation, suggesting a possible sensitive period for cortical auditory development in ANSD. That CAEP responses were correlated with behavioral outcome highlights their clinical decision-making utility. PMID:23819618

  14. Plasticity in the Developing Auditory Cortex: Evidence from Children with Sensorineural Hearing Loss and Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Cardon, Garrett; Campbell, Julia; Sharma, Anu

    2013-01-01

    The developing auditory cortex is highly plastic. As such, the cortex is both primed to mature normally and at risk for re-organizing abnormally, depending upon numerous factors that determine central maturation. From a clinical perspective, at least two major components of development can be manipulated: 1) input to the cortex and 2) the timing of cortical input. Children with sensorineural hearing loss (SNHL) and auditory neuropathy spectrum disorder (ANSD) have provided a model of early deprivation of sensory input to the cortex, and demonstrated the resulting plasticity and development that can occur upon introduction of stimulation. In this article, we review several fundamental principles of cortical development and plasticity and discuss the clinical applications in children with SNHL and ANSD who receive intervention with hearing aids and/or cochlear implants. PMID:22668761

  15. Assessing attentional systems in children with Attention Deficit Hyperactivity Disorder.

    PubMed

    Casagrande, Maria; Martella, Diana; Ruggiero, Maria Cleonice; Maccari, Lisa; Paloscia, Claudio; Rosa, Caterina; Pasini, Augusto

    2012-01-01

    The aim of this study was to evaluate the efficiency and interactions of attentional systems in children with Attention Deficit Hyperactivity Disorder (ADHD) by considering the effects of reinforcement and auditory warning on each component of attention. Thirty-six drug-naïve children (18 children with ADHD/18 typically developing children) performed two revised versions of the Attentional Network Test, which assess the efficiency of alerting, orienting, and executive systems. In feedback trials, children received feedback about their accuracy, whereas in the no-feedback trials, feedback was not given. In both conditions, children with ADHD performed more slowly than did typically developing children. They also showed impairments in the ability to disengage attention and in executive functioning, which improved when alertness was increased by administering the auditory warning. The performance of the attentional networks appeared to be modulated by the absence or the presence of reinforcement. We suggest that the observed executive system deficit in children with ADHD could depend on their low level of arousal rather than being an independent disorder. © The Author 2011. Published by Oxford University Press. All rights reserved.
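
    The alerting, orienting, and executive efficiencies assessed here are conventionally derived from the Attentional Network Test as reaction-time differences between cue and flanker conditions (following Fan et al.'s standard scoring). A minimal sketch of that scoring, with illustrative RTs rather than the study's data:

```python
from statistics import mean

def ant_scores(rt):
    """Standard ANT network scores from mean correct-trial RTs (ms).
    rt: dict mapping condition name -> list of RTs."""
    m = {cond: mean(vals) for cond, vals in rt.items()}
    return {
        "alerting":  m["no_cue"] - m["double_cue"],       # warning benefit
        "orienting": m["center_cue"] - m["spatial_cue"],  # spatial-cue benefit
        "executive": m["incongruent"] - m["congruent"],   # conflict cost
    }

# Illustrative per-condition RTs (ms), not data from this study.
rts = {
    "no_cue": [620, 640], "double_cue": [580, 600],
    "center_cue": [590, 610], "spatial_cue": [550, 570],
    "incongruent": [680, 700], "congruent": [560, 580],
}

scores = ant_scores(rts)
assert scores["alerting"] == 40 and scores["orienting"] == 40
assert scores["executive"] == 120
```

A larger executive score indicates a greater conflict cost; the study's finding that an auditory warning improved performance would show up as a larger alerting difference.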

  16. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    PubMed

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and for understanding the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Evolutionary diversification of the auditory organ sensilla in Neoconocephalus katydids (Orthoptera: Tettigoniidae) correlates with acoustic signal diversification over phylogenetic relatedness and life history.

    PubMed

    Strauß, J; Alt, J A; Ekschmitt, K; Schul, J; Lakes-Harlan, R

    2017-06-01

    Neoconocephalus Tettigoniidae are a model for the evolution of acoustic signals, as male calls have diversified in temporal structure during the radiation of the genus. The call divergence and phylogeny in Neoconocephalus are established, but in tettigoniids in general, accompanying evolutionary changes in hearing organs have not been studied. We investigated anatomical changes of the tympanal hearing organs during the evolutionary radiation and divergence of intraspecific acoustic signals. We compared the neuroanatomy of auditory sensilla (crista acustica) from nine Neoconocephalus species for the number of auditory sensilla and the length of the crista acustica. These parameters were correlated with differences in temporal call features, body size, life histories and phylogenetic positions. In this way, adaptive responses to shifting frequencies of male calls and changes in their temporal patterns can be evaluated against phylogenetic constraints and allometry. All species showed well-developed auditory sensilla, with species averages of 32-35 sensilla. Crista acustica length and sensillum numbers correlated with body size, but not with phylogenetic position or life history. Significant correlations also existed with specific call patterns: a higher number of auditory sensilla occurred in species with continuous calls or slow pulse rates, and a longer crista acustica occurred in species with double pulses or slow pulse rates. The auditory sensilla thus show significant differences between species despite their recent radiation and morphological and ecological similarities. This indicates responses to natural and sexual selection, including divergence of temporal and spectral signal properties. Phylogenetic constraints are unlikely to limit these changes of the auditory systems. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  18. Comparison between the analysis of the loudness dependency of the auditory N1/P2 component with LORETA and dipole source analysis in the prediction of treatment response to the selective serotonin reuptake inhibitor citalopram in major depression.

    PubMed

    Mulert, C; Juckel, G; Augustin, H; Hegerl, U

    2002-10-01

    The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, the LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be at least partly separated. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least as well as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The correlation of the improvement on the Hamilton Scale is significant with the LORETA LDAEP values (0.56; P=0.031) but not with the dipole source analysis LDAEP values (0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool for the analysis of the central serotonergic system.

  19. High resolution 1H NMR-based metabonomic study of the auditory cortex analogue of developing chick (Gallus gallus domesticus) following prenatal chronic loud music and noise exposure.

    PubMed

    Kumar, Vivek; Nag, Tapas Chandra; Sharma, Uma; Mewar, Sujeet; Jagannathan, Naranamangalam R; Wadhwa, Shashi

    2014-10-01

    Proper functional development of the auditory cortex (ACx) critically depends on early relevant sensory experiences. Exposure to high-intensity noise (industrial/traffic) and music, a current public health concern, may disrupt the proper development of the ACx and associated behavior. The biochemical mechanisms associated with such activity-dependent changes during development are poorly understood. Here we report the effects of prenatal chronic (last 10 days of incubation) exposure to music and noise at 110 dB sound pressure level (SPL) on the metabolic profile of the auditory cortex analogue/field L (AuL) in domestic chicks. Perchloric acid extracts of the AuL of post-hatch day 1 chicks from control, music and noise groups were subjected to high resolution (700 MHz) (1)H NMR spectroscopy. Multivariate regression analysis of the concentration data of 18 metabolites revealed a significant class separation between the control and loud sound exposed groups, indicating a metabolic perturbation. Comparison of absolute concentrations of metabolites showed that overstimulation with loud sound, independent of spectral characteristics (music or noise), led to extensive usage of major energy metabolites, e.g., glucose, β-hydroxybutyrate and ATP. On the other hand, high glutamine levels and sustained levels of neuromodulators and alternate energy sources, e.g., creatine, ascorbate and lactate, indicated a restorative systems response to a condition of neuronal hyperactivity. At the same time, decreased aspartate and taurine levels in the noise group suggested a differential impact of prenatal chronic loud noise over music exposure. Thus prenatal exposure to loud sound, especially noise, alters the metabolic activity in the AuL, which in turn can affect its functional development and later auditory-associated behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    PubMed

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. Copyright © 2016 the authors.

  1. Exploring the role of auditory analysis in atypical compared to typical language development.

    PubMed

    Grube, Manon; Cooper, Freya E; Kumar, Sukhbinder; Kelly, Tom; Griffiths, Timothy D

    2014-02-01

    The relationship between auditory processing and language skills has been debated for decades. Previous findings have been inconsistent, both in typically developing and impaired subjects, including those with dyslexia or specific language impairment. Whether correlations between auditory and language skills are consistent between different populations has hardly been addressed at all. The present work presents an exploratory approach to testing for patterns of correlations in a range of measures of auditory processing. In a recent study, we reported findings from a large cohort of eleven-year-olds on a range of auditory measures, and the data supported a specific role for the processing of short sequences in pitch and time in typical language development. Here we tested whether a group of individuals with dyslexic traits (DT group; n = 28) from the same year group would show the same pattern of correlations between auditory and language skills as the typically developing group (TD group; n = 173). Regarding the raw scores, the DT group showed significantly poorer performance on the language but not the auditory measures, including measures of pitch, time and rhythm, and timbre (modulation). In terms of correlations, there was a trend toward weaker correlations between short-sequence processing and language skills, contrasted with a significant increase in correlation for basic, single-sound processing, in particular in the domain of modulation. The data support the notion that the fundamental relationship between auditory and language skills might differ in atypical compared to typical language development, with the implication that merging data or drawing inferences between populations might be problematic. Further examination of the relationship between both basic sound feature analysis and music-like sound analysis and language skills in impaired populations might allow the development of appropriate training strategies. 
These might include types of musical training to augment language skills via their common bases in sound sequence analysis. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults

    PubMed Central

    Anderson, Samira; Jenkins, Kimberly

    2015-01-01

    Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912

  3. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, the Parents' Evaluation of Aural/Oral Performance of Children (PEACH) rating scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  4. Influence of Acoustic Overstimulation on the Central Auditory System: A Functional Magnetic Resonance Imaging (fMRI) Study.

    PubMed

    Wolak, Tomasz; Cieśla, Katarzyna; Rusiniak, Mateusz; Piłka, Adam; Lewandowska, Monika; Pluta, Agnieszka; Skarżyński, Henryk; Skarżyński, Piotr H

    2016-11-28

    BACKGROUND The goal of the fMRI experiment was to explore the involvement of central auditory structures in pathomechanisms of a behaviorally manifested auditory temporary threshold shift in humans. MATERIAL AND METHODS The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation of narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5-4.5 kHz sweeps. RESULTS The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect can be seen already in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. CONCLUSIONS The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation.

  5. Influence of Acoustic Overstimulation on the Central Auditory System: A Functional Magnetic Resonance Imaging (fMRI) Study

    PubMed Central

    Wolak, Tomasz; Cieśla, Katarzyna; Rusiniak, Mateusz; Piłka, Adam; Lewandowska, Monika; Pluta, Agnieszka; Skarżyński, Henryk; Skarżyński, Piotr H.

    2016-01-01

    Background The goal of the fMRI experiment was to explore the involvement of central auditory structures in pathomechanisms of a behaviorally manifested auditory temporary threshold shift in humans. Material/Methods The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation of narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5–4.5 kHz sweeps. Results The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect can be seen already in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. Conclusions The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation. PMID:27893698

  6. Is auditory perceptual timing a core deficit of developmental coordination disorder?

    PubMed

    Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen

    2018-05-09

    Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity disorder, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at ages 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing in DCD using various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially advance our understanding of the etiology of DCD, identify early biomarkers of DCD, and inform the development of evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.

  7. Sleep and Neurofunctions Throughout Child Development: Lasting Effects of Early Iron Deficiency

    PubMed Central

    Peirano, Patricio D.; Algarín, Cecilia R.; Chamorro, Rodrigo; Garrido, Marcelo I.; Lozoff, Betsy

    2013-01-01

    Iron-deficiency anemia (IDA) continues to be the most common single-nutrient deficiency in the world. Infants are at particular risk due to rapid growth and limited dietary sources of iron. An estimated 20–25% of the world’s infants have IDA, with at least as many having iron deficiency without anemia. High prevalence is found primarily in developing countries, but also among poor, minority, and immigrant groups in developed ones. Infants with IDA test lower in mental and motor development assessments and show affective differences. After iron therapy, follow-up studies point to long-lasting differences in several domains. Neurofunctional studies showed slower neural transmission in the auditory system despite 1 year of iron therapy in IDA infants; they still had slower transmission in both the auditory and visual systems at preschool age. Differences in motor activity patterning in all sleep-waking states and several differences in sleep state organization were also reported. Persistent sleep and neurofunctional effects could contribute to reduced potential for optimal behavioral and cognitive outcomes in children with a history of IDA. PMID:19214058

  8. Multivariable manual control with simultaneous visual and auditory presentation of information. [for improved compensatory tracking performance of human operator

    NASA Technical Reports Server (NTRS)

    Uhlemann, H.; Geiser, G.

    1975-01-01

    Multivariable manual compensatory tracking experiments were carried out to determine typical strategies of the human operator and the conditions under which his performance improves when one of the visual displays of the tracking errors is supplemented by auditory feedback. Because the tracking error of the exclusively visually displayed system was found to decrease, while that of the auditorily supported system in general did not, it was concluded that auditory feedback unloads the operator's visual system, allowing him to concentrate on the remaining exclusively visual displays.

  9. Early electrophysiological markers of atypical language processing in prematurely born infants.

    PubMed

    Paquette, Natacha; Vannasing, Phetsamone; Tremblay, Julie; Lefebvre, Francine; Roy, Marie-Sylvie; McKerral, Michelle; Lepore, Franco; Lassonde, Maryse; Gallagher, Anne

    2015-12-01

    Because nervous system development may be affected by prematurity, many prematurely born children present language or cognitive disorders at school age. The goal of this study is to investigate whether these impairments can be identified early in life using electrophysiological auditory event-related potentials (AERPs) and mismatch negativity (MMN). Brain responses to speech and non-speech stimuli were assessed in prematurely born children to identify early electrophysiological markers of language and cognitive impairments. Participants were 74 children (41 full-term, 33 preterm) aged 3, 12, and 36 months. Pre-attentional auditory responses (MMN and AERPs) were assessed using an oddball paradigm, with speech and non-speech stimuli presented in counterbalanced order between participants. Language and cognitive development were assessed using the Bayley Scale of Infant Development, Third Edition (BSID-III). Results show that preterms as young as 3 months old had delayed MMN response to speech stimuli compared to full-terms. A significant negative correlation was also found between MMN latency to speech sounds and the BSID-III expressive language subscale. However, no significant differences between full-terms and preterms were found for the MMN to non-speech stimuli, suggesting preserved pre-attentional auditory discrimination abilities in these children. Identification of early electrophysiological markers for delayed language development could facilitate timely interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Noise Trauma Induced Neural Plasticity Throughout the Auditory System of Mongolian Gerbils: Differences between Tinnitus Developing and Non-Developing Animals

    PubMed Central

    Tziridis, Konstantin; Ahlf, Sönke; Jeschke, Marcus; Happel, Max F. K.; Ohl, Frank W.; Schulze, Holger

    2015-01-01

    In this study, we describe differences in neural plasticity in the auditory cortex (AC) between animals that developed subjective tinnitus (group T) after noise-induced hearing loss (NIHL) and those that did not (non-tinnitus group, NT). To this end, our analysis focuses on the input activity of cortical neurons based on the temporal and spectral analysis of local field potential (LFP) recordings and an in-depth analysis of auditory brainstem responses (ABR) in the same animals. In response to NIHL, NT animals show a significant general reduction in overall cortical activity and spectral power, as well as changes in all ABR wave amplitudes as a function of loudness. In contrast, T animals show no significant change in overall cortical activity as assessed by root-mean-square analysis of LFP amplitudes, but a specific increase in LFP spectral power and in the amplitude of ABR wave V, reflecting activity in the inferior colliculus (IC). Based on these results, we put forward a refined model of tinnitus prevention after NIHL that acts via a top-down global (i.e., frequency-unspecific) inhibition reducing overall neuronal activity in AC and IC, thereby counteracting the NIHL-induced bottom-up frequency-specific neuroplasticity suggested in current models of tinnitus development. PMID:25713557
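    The overall-activity measure named in this abstract, root-mean-square (RMS) analysis of LFP amplitudes, reduces an epoch of voltage samples to a single magnitude. A minimal stdlib sketch of that statistic, using a synthetic trace rather than real LFP data (a generic illustration, not the authors' analysis code):

    ```python
    import math

    def rms(samples):
        """Root-mean-square amplitude of a sampled signal."""
        return math.sqrt(sum(x * x for x in samples) / len(samples))

    # Synthetic stand-in for one LFP epoch: ten full cycles of a unit sine.
    # A sine of amplitude A has RMS A / sqrt(2), a handy sanity check.
    trace = [math.sin(2 * math.pi * t / 100) for t in range(1000)]
    print(round(rms(trace), 3))  # 0.707
    ```

    Comparing such RMS values before and after noise trauma is the kind of contrast the study draws between the NT and T groups.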

  11. Candesartan ameliorates impaired fear extinction induced by innate immune activation.

    PubMed

    Quiñones, María M; Maldonado, Lizette; Velazquez, Bethzaly; Porter, James T

    2016-02-01

    Patients with post-traumatic stress disorder (PTSD) tend to show signs of a relatively increased inflammatory state suggesting that activation of the immune system may contribute to the development of PTSD. In the present study, we tested whether activation of the innate immune system can disrupt acquisition or recall of auditory fear extinction using an animal model of PTSD. Male adolescent rats received auditory fear conditioning in context A. The next day, an intraperitoneal injection of lipopolysaccharide (LPS; 100 μg/kg) prior to auditory fear extinction in context B impaired acquisition and recall of extinction. LPS (100 μg/kg) given after extinction training did not impair extinction recall suggesting that LPS did not affect consolidation of extinction. In contrast to cued fear extinction, contextual fear extinction was not affected by prior injection of LPS (100 μg/kg). Although LPS also reduced locomotion, we could dissociate the effects of LPS on extinction and locomotion by using a lower dose of LPS (50 μg/kg) which impaired locomotion without affecting extinction. In addition, 15 h after an injection of 250 μg/kg LPS in adult rats, extinction learning and recall were impaired without affecting locomotion. A sub-chronic treatment with candesartan, an angiotensin II type 1 receptor blocker, prevented the LPS-induced impairment of extinction in adult rats. Our results demonstrate that activation of the innate immune system can disrupt auditory fear extinction in adolescent and adult animals. These findings also provide direction for clinical studies of novel treatments that modulate the innate immune system for stress-related disorders like PTSD. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Temporary conductive hearing loss in early life impairs spatial memory of rats in adulthood.

    PubMed

    Zhao, Han; Wang, Li; Chen, Liang; Zhang, Jinsheng; Sun, Wei; Salvi, Richard J; Huang, Yi-Na; Wang, Ming; Chen, Lin

    2018-05-31

    It is known that an interruption of acoustic input in early life will result in abnormal development of the auditory system. Here, we further show that this negative impact actually extends beyond the auditory system to the hippocampus, a system critical for spatial memory. We induced a temporary conductive hearing loss (TCHL) in P14 rats by perforating the eardrum and allowing it to heal. The Morris water maze and Y-maze tests were deployed to evaluate spatial memory of the rats. Electrophysiological recordings and anatomical analysis were made to evaluate functional and structural changes in the hippocampus following TCHL. The rats with the TCHL had nearly normal hearing at P42, but had decreased performance on the Morris water maze and Y-maze tests compared with the control group. A functional deficit in the hippocampus of the rats with the TCHL was found, as revealed by the depressed long-term potentiation and the reduced NMDA receptor-mediated postsynaptic current. A structural deficit in the hippocampus of those animals was also found, as revealed by the abnormal expression of NMDA receptors, the decreased number of dendritic spines, the reduced postsynaptic density and the reduced level of neurogenesis. Our study demonstrates that even temporary auditory sensory deprivation in early life of rats results in abnormal development of the hippocampus and consequently impairs spatial memory in adulthood. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.

  13. Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2013-01-01

    Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
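    Adaptation to the mean and contrast of stimulus statistics, as described in this abstract, can be caricatured in code as divisive normalization: subtract the recent mean, divide by the recent standard deviation. The sketch below illustrates why such adaptation promotes noise-tolerant codes; it is an illustrative toy, not the neuronal model or decoder used in the study, and the stimulus values are invented.

    ```python
    import statistics

    def normalize_to_statistics(stimulus):
        """Remove the mean and divide out the contrast (standard deviation)."""
        mu = statistics.fmean(stimulus)
        sigma = statistics.pstdev(stimulus)
        return [(x - mu) / sigma for x in stimulus]

    # The same sound at a higher overall level and offset (e.g. riding on
    # background noise that shifts the stimulus statistics) maps onto the
    # same normalized representation.
    quiet = [0.0, 1.0, 0.0, -1.0]
    loud = [5.0 + 3.0 * x for x in quiet]  # scaled and offset copy
    q, l = normalize_to_statistics(quiet), normalize_to_statistics(loud)
    print(max(abs(a - b) for a, b in zip(q, l)) < 1e-12)  # True
    ```

    A code that operates on the normalized values is, by construction, invariant to additive offsets and gain changes in its input, which is the intuition behind linking statistical adaptation to noise tolerance.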

  14. Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

    ERIC Educational Resources Information Center

    Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.

    2008-01-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…

  15. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.

  16. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    NASA Astrophysics Data System (ADS)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be easily degraded by various factors. Normal-hearing listeners, however, can accurately perceive sounds of interest, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, human auditory processing was simulated by computational auditory scene analysis (CASA), built on physiological and psychological investigations of ASA. The CASA system comprised the Zilany-Bruce auditory model, followed by fundamental frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords recorded under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping (DTW) distance. In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from the acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. 
The model modifications include the introduction of a higher Q factor, a middle-ear filter more analogous to the human auditory system, regulation of the time-constant update for filters in the signal/control paths, and level-independent frequency glides with fixed frequency modulation. First, we scrutinized keyword recognition performance using the proposed methods in quiet and noise-corrupted environments. The results argue that multi-scale integration should be used along with CE to avoid ambiguous continuity in unvoiced segments. Moreover, the inclusion of all the modifications was observed to provide noise-type-independent robustness, particularly under severe interference. Next, the CASA system with the auditory model was implemented in a single/dual-channel ASR using the reference TIMIT corpus to obtain more general results. The Hidden Markov Model Toolkit (HTK) was used for phone recognition in various environmental conditions. In a single-channel ASR, the results argue that unmasked acoustic features (unmasked GFCC) should be combined with target estimates from the mask to compensate for missing information. In a dual-channel ASR, the combined GFCC yielded the highest performance regardless of interference within speech. Furthermore, the consistent improvement of noise robustness by GFCC (unmasked or combined) shows the validity of our proposed CASA implementation in a dual-microphone system. In conclusion, the proposed framework demonstrates the robustness of the acoustic features under various background interferences via both direct distance evaluation and statistical assessment. In addition, the introduction of a dual-microphone system using this framework shows the potential for effective implementation of auditory model-based CASA in ASR.
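    Of the two feature-robustness measures named in this abstract, the dynamic time warping (DTW) distance is the algorithmic one: it finds the cheapest monotonic alignment between two feature sequences of possibly different lengths. A generic textbook sketch (not the authors' implementation), with each frame reduced to a short vector of floats:

    ```python
    import math

    def dtw_distance(seq_a, seq_b):
        """Accumulated-cost DTW distance with Euclidean local cost."""
        n, m = len(seq_a), len(seq_b)
        INF = float("inf")
        # cost[i][j] = cheapest alignment of seq_a[:i] with seq_b[:j]
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = math.dist(seq_a[i - 1], seq_b[j - 1])  # frame-to-frame cost
                cost[i][j] = d + min(cost[i - 1][j],       # skip a frame in seq_a
                                     cost[i][j - 1],       # skip a frame in seq_b
                                     cost[i - 1][j - 1])   # advance both
        return cost[n][m]

    # A time-stretched copy of a feature trajectory aligns at zero cost,
    # which is exactly why DTW suits sequences of unequal length.
    a = [[0.0], [1.0], [2.0], [1.0]]
    b = [[0.0], [1.0], [1.0], [2.0], [1.0]]
    print(dtw_distance(a, a), dtw_distance(a, b))  # 0.0 0.0
    ```

    Comparing a clean-condition feature sequence with its noisy counterpart by this distance yields a scalar score, so a smaller DTW distance can be read as a more noise-robust feature.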

  17. White matter microstructural properties correlate with sensorimotor synchronization abilities.

    PubMed

    Blecher, Tal; Tal, Idan; Ben-Shachar, Michal

    2016-09-01

    Sensorimotor synchronization (SMS) to an external auditory rhythm is a highly developed ability in humans, particularly evident in dancing and singing. This ability is typically measured in the lab via a simple task of finger tapping to an auditory beat. Simplistic as the task is, there is some evidence that poor performance on it could be related to impaired phonological and reading abilities in children. Auditory-motor synchronization is hypothesized to rely on a tight coupling between auditory and motor neural systems, but the specific pathways that mediate this coupling have not yet been identified. In this study, we test this hypothesis and examine the contribution of fronto-temporal and callosal connections to specific measures of rhythmic synchronization. Twenty participants underwent SMS testing and diffusion magnetic resonance imaging (dMRI) measurements. We quantified the mean asynchrony between an auditory beat and participants' finger taps, as well as the time to resynchronize (TTR) with an altered meter, and examined the correlations between these behavioral measures and diffusivity in a small set of predefined pathways. We found significant correlations between asynchrony and fractional anisotropy (FA) in the left (but not right) arcuate fasciculus and in the temporal segment of the corpus callosum. On the other hand, TTR correlated with FA in the precentral segment of the callosum. To our knowledge, this is the first demonstration that relates these particular white matter tracts with performance on an auditory-motor rhythmic synchronization task. We propose that left fronto-temporal and temporal-callosal fibers are involved in prediction and constant comparison between auditory inputs and motor commands, while inter-hemispheric connections between the motor/premotor cortices contribute to successful resynchronization of motor responses with a new external rhythm, perhaps via inhibition of tapping to the previous rhythm. 
Our results indicate that auditory-motor synchronization skills are associated with anatomical pathways that have been previously related to phonological awareness, thus offering a possible anatomical basis for the behavioral covariance between these abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Development of N-Methyl-D-Aspartate Receptor Subunits in Avian Auditory Brainstem

    PubMed Central

    TANG, YE-ZHONG; CARR, CATHERINE E.

    2012-01-01

    N-methyl-D-aspartate (NMDA) receptor subunit-specific probes were used to characterize developmental changes in the distribution of excitatory amino acid receptors in the chicken’s auditory brainstem nuclei. Although NR1 subunit expression does not change greatly during the development of the cochlear nuclei in the chicken (Tang and Carr [2004] Hear. Res. 191:79-89), there are significant developmental changes in NR2 subunit expression. We used in situ hybridization against NR1, NR2A, NR2B, NR2C, and NR2D to compare NR1 and NR2 expression during development. All five NMDA subunits were expressed in the auditory brainstem before embryonic day (E) 10, when electrical activity and synaptic responses appear in the nucleus magnocellularis (NM) and the nucleus laminaris (NL). At this time, the dominant form of the receptor appeared to contain NR1 and NR2B. NR2A appeared to replace NR2B by E14, a time that coincides with synaptic refinement and evoked auditory responses. NR2C did not change greatly during auditory development, whereas NR2D increased from E10 and remained at fairly high levels into adulthood. Thus, changes in NMDA NR2 receptor subunits may contribute to the development of auditory brainstem responses in the chick. PMID:17366608

  19. Abnormal Auditory Gain in Hyperacusis: Investigation with a Computational Model

    PubMed Central

    Diehl, Peter U.; Schaette, Roland

    2015-01-01

    Hyperacusis is a frequent auditory disorder that is characterized by abnormal loudness perception where sounds of relatively normal volume are perceived as too loud or even painfully loud. As hyperacusis patients show decreased loudness discomfort levels (LDLs) and steeper loudness growth functions, it has been hypothesized that hyperacusis might be caused by an increase in neuronal response gain in the auditory system. Moreover, since about 85% of hyperacusis patients also experience tinnitus, the conditions might be caused by a common mechanism. However, the mechanisms that give rise to hyperacusis have remained unclear. Here, we have used a computational model of the auditory system to investigate candidate mechanisms for hyperacusis. Assuming that perceived loudness is proportional to the summed activity of all auditory nerve (AN) fibers, the model was tuned to reproduce normal loudness perception. We then evaluated a variety of potential hyperacusis gain mechanisms by determining their effects on model equal-loudness contours and comparing the results to the LDLs of hyperacusis patients with normal hearing thresholds. Hyperacusis was best accounted for by an increase in non-linear gain in the central auditory system. Good fits to the average patient LDLs were obtained for a general increase in gain that affected all frequency channels to the same degree, and also for a frequency-specific gain increase in the high-frequency range. Moreover, the gain needed to be applied after subtraction of spontaneous activity of the AN, which is in contrast to current theories of tinnitus generation based on amplification of spontaneous activity. Hyperacusis and tinnitus might therefore be caused by different changes in neuronal processing in the central auditory system. PMID:26236277
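
The abstract's modelling assumption, that perceived loudness is proportional to summed auditory-nerve activity and that the candidate hyperacusis gain acts after subtraction of spontaneous activity, can be caricatured in a few lines. The firing rates, spontaneous rate, and linear read-out below are invented for illustration and are not values from the model:

```python
def perceived_loudness(an_rates, spont_rate, central_gain=1.0):
    """Toy read-out: sum driven (spontaneous-subtracted) auditory-nerve
    activity across fibers, then apply a central gain. Because the gain
    acts after the subtraction, silence (rates == spont_rate) stays
    silent no matter how large the gain is."""
    driven = [max(r - spont_rate, 0.0) for r in an_rates]
    return central_gain * sum(driven)

rates = [60.0, 80.0, 100.0]  # hypothetical AN firing rates, spikes/s
spont = 50.0                 # hypothetical spontaneous rate

normal = perceived_loudness(rates, spont)                    # 90.0
hyper = perceived_loudness(rates, spont, central_gain=2.0)   # 180.0 (too loud)
silence = perceived_loudness([spont] * 3, spont, central_gain=10.0)  # 0.0
```

The last line illustrates the distinction the paper draws: gain applied after the subtraction leaves spontaneous activity inaudible, which is why this mechanism accounts for hyperacusis without implying that amplified spontaneous activity is heard as tinnitus.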

  20. Changes in auditory memory performance following the use of frequency-modulated system in children with suspected auditory processing disorders.

    PubMed

    Umat, Cila; Mukari, Siti Z; Ezan, Nurul F; Din, Normah C

    2011-08-01

    To examine changes in short-term auditory memory following the use of a frequency-modulated (FM) system in children with suspected auditory processing disorders (APDs), and to compare the advantages of bilateral over unilateral FM fitting. This longitudinal study involved 53 children from Sekolah Kebangsaan Jalan Kuantan 2, Kuala Lumpur, Malaysia who fulfilled the inclusion criteria. The study was conducted from September 2007 to October 2008 in the Department of Audiology and Speech Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia. The children were aged 7-10 years and were assigned to 3 groups: 15 in the control group (not fitted with FM), 19 in the unilateral, and 19 in the bilateral FM-fitting group. Subjects wore the FM system during school time for 12 weeks. Their working memory (WM), best learning (BL), and retention of information (ROI) were measured using the Rey Auditory Verbal Learning Test at pre-fitting, post-fitting (after 12 weeks of FM usage), and at long term (one year after FM usage ended). There were significant differences in the mean WM (p=0.001), BL (p=0.019), and ROI (p=0.005) scores across the measurement times, with the mean scores at long term consistently higher than at pre-fitting, despite similar performance at baseline (p>0.05). There was no significant difference in performance between the unilateral- and bilateral-fitting groups. The use of FM might have a long-term effect on improving selected short-term auditory memories of some children with suspected APDs, and one may not need to use 2 FM receivers to obtain these auditory memory benefits.

  1. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
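
The coding chain described here (transform the signal into an invertible representation, quantize there, then invert back to the acoustic domain) can be sketched as follows. A plain DFT stands in for the paper's invertible auditory model, and the quantization step size is a made-up value; the point illustrated is only that quantizing in the transform domain keeps the distortion criterion simple:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for the invertible auditory model)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT: the 'model inversion' step back to the acoustic domain."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def quantize(X, step):
    """Uniform quantization of real and imaginary parts in the transform domain."""
    return [complex(round(v.real / step) * step, round(v.imag / step) * step)
            for v in X]

def code_and_decode(signal, step=0.01):
    """Transform -> quantize -> invert: the encoder/decoder chain in miniature."""
    return idft(quantize(dft(signal), step))
```

With a fine step size the round trip is nearly lossless; coarsening the step trades bit rate against distortion, and in the real system that trade-off is shaped by the perceptual organisation of the auditory representation rather than by a plain DFT.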

  2. Noise-induced tinnitus: auditory evoked potential in symptomatic and asymptomatic patients.

    PubMed

    Santos-Filha, Valdete Alves Valentins dos; Samelli, Alessandra Giannella; Matas, Carla Gentile

    2014-07-01

    We evaluated the central auditory pathways in workers with noise-induced tinnitus and normal hearing thresholds, compared the auditory brainstem response results between groups with and without tinnitus, and correlated tinnitus location with the auditory brainstem response findings in individuals with a history of occupational noise exposure. Sixty individuals participated in the study, and the following procedures were performed: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies from 0.25 to 8 kHz, and auditory brainstem response. The mean auditory brainstem response latencies were lower in the Control group than in the Tinnitus group, but no significant differences between the groups were observed. Qualitative analysis showed more alterations in the lower brainstem in the Tinnitus group. The strongest relationship between tinnitus location and auditory brainstem response alterations was detected in individuals with bilateral tinnitus and bilateral auditory brainstem response alterations compared with patients with unilateral alterations. Our findings suggest a possible dysfunction of the central auditory nervous system (brainstem) in individuals with noise-induced tinnitus and a normal hearing threshold.

  3. Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve

    PubMed Central

    Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.

    2015-01-01

    The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538

  4. Auditory Cortex Is Required for Fear Potentiation of Gap Detection

    PubMed Central

    Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.

    2014-01-01

    Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510

  5. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    PubMed

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning in children fitted with an auditory brainstem implant (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory canal, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities and associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities who cannot benefit from a CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open-set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. 
One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere with a CI, had auditory neuropathy; one child showed total cochlear ossification bilaterally due to meningitis; and one child had profound hearing loss with cochlear fractures after a head injury. Twelve of these children had multiple associated psychomotor handicaps. The retrosigmoid approach was used in all children. Intraoperative electrical auditory brainstem responses (EABRs) and postoperative EABRs and electrical middle latency responses (EMLRs) were performed. Perceptual auditory abilities were evaluated with the Evaluation of Auditory Responses to Speech (EARS) battery - the Listening Progress Profile (LIP), the Meaningful Auditory Integration Scale (MAIS), the Meaningful Use of Speech Scale (MUSS) - and the Category of Auditory Performance (CAP). Cognitive evaluation was performed on seven children using the Leiter International Performance Scale - Revised (LIPS-R) test with the following subtests: Figure ground, Form completion, Sequential order and Repeated pattern. No postoperative complications were observed. All children consistently used their devices for >75% of waking hours and had environmental sound awareness and utterance of words and simple sentences. Their CAP scores ranged from 1 to 7 (average = 4); with MAIS they scored 2-97.5% (average = 38%); MUSS scores ranged from 5 to 100% (average = 49%) and LIP scores from 5 to 100% (average = 45%). Owing to associated disabilities, 12 children were given other therapies (e.g. physical therapy and counselling) in addition to speech and aural rehabilitation therapy. Scores for two of the four subtests of LIPS-R in this study increased significantly during the first year of auditory brainstem implant use in all seven children selected for cognitive evaluation.

  6. The utility of visual analogs of central auditory tests in the differential diagnosis of (central) auditory processing disorder and attention deficit hyperactivity disorder.

    PubMed

    Bellis, Teri James; Billiet, Cassie; Ross, Jody

    2011-09-01

    Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al, 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). 
Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA [Right-Ear Advantage], HLD [Humming-Labeling Differential]). Children with both ADHD and (C)APD performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD groups performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates. Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process. American Academy of Audiology.

  7. Left and right reaction time differences to the sound intensity in normal and AD/HD children.

    PubMed

    Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza

    2017-06-01

    The right hemisphere, which is implicated in sound-intensity discrimination, shows abnormalities in people with attention deficit/hyperactivity disorder (AD/HD). However, whether this right-hemisphere defect influences the intensity sensation of AD/HD subjects has not been studied. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. The statistical results showed that the sensitivity of AD/HD subjects to intensity was lower than that of the normal group (p < 0.0001). The left and right pathways of the auditory system had the same pattern of response in AD/HD subjects (p > 0.05). In the control group, however, the left pathway was more sensitive to the sound intensity level than the right one (p = 0.0156). It is probable that the right-hemisphere deficit has influenced the auditory sensitivity of AD/HD children. Possible deficits in other auditory system components, such as the middle ear, inner ear, or the brainstem nuclei involved, may also account for the observed results. The development of new biomarkers based on the sensitivity of the brain hemispheres to sound intensity is suggested to estimate the risk of AD/HD. Designing new techniques to correct auditory feedback in behavioral treatment sessions is also proposed. Copyright © 2017. Published by Elsevier B.V.

  8. Auditory and communicative abilities in the auditory neuropathy spectrum disorder and mutation in the Otoferlin gene: clinical cases study.

    PubMed

    Costa, Nayara Thais de Oliveira; Martinho-Carvalho, Ana Claudia; Cunha, Maria Claudia; Lewis, Doris Ruthi

    2012-01-01

    This study aimed to investigate the auditory and communicative abilities of children diagnosed with Auditory Neuropathy Spectrum Disorder due to mutation in the Otoferlin gene. It is a descriptive, qualitative study in which two siblings with this diagnosis were assessed. The procedures conducted were: speech perception tests for children with profound hearing loss, and assessment of communication abilities using the Behavioral Observation Protocol. Because they were siblings, the subjects in the study shared the same family and communicative context. However, they developed different communication abilities, especially regarding the use of oral language. The study showed that Auditory Neuropathy Spectrum Disorder is a heterogeneous condition in all its aspects, and it is not possible to make generalizations or assume that cases with similar clinical features will develop similar auditory and communicative abilities, even when they are siblings. It is concluded that the acquisition of communicative abilities involves subjective factors, which should be investigated based on the uniqueness of each case.

  9. Reliance on auditory feedback in children with childhood apraxia of speech.

    PubMed

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R

    2015-01-01

    Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  10. How challenges in auditory fMRI led to general advancements for the field.

    PubMed

    Talavage, Thomas M; Hall, Deborah A

    2012-08-15

    In the early years of fMRI research, the auditory neuroscience community sought to expand its knowledge of the underlying physiology of hearing, while also seeking to come to grips with the inherent acoustic disadvantages of working in the fMRI environment. Early collaborative efforts between prominent auditory research laboratories and prominent fMRI centers led to development of a number of key technical advances that have subsequently been widely used to elucidate principles of auditory neurophysiology. Perhaps the key imaging advance was the simultaneous and parallel development of strategies to use pulse sequences in which the volume acquisitions were "clustered," providing gaps in which stimuli could be presented without direct masking. Such sequences have become widespread in fMRI studies using auditory stimuli and also in a range of translational research domains. This review presents the parallel stories of the people and the auditory neurophysiology research that led to these sequences. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.

    ERIC Educational Resources Information Center

    Wetherby, Amy Miller; And Others

    1981-01-01

    The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)

  12. Development of the acoustically evoked behavioral response in larval plainfin midshipman fish, Porichthys notatus.

    PubMed

    Alderks, Peter W; Sisneros, Joseph A

    2013-01-01

    The ontogeny of hearing in fishes has become a major interest among bioacoustics researchers studying fish behavior and sensory ecology. Most fish begin to detect acoustic stimuli during the larval stage, which can be important for navigation, predator avoidance and settlement; however, relatively little is known about the hearing capabilities of larval fishes. We characterized the acoustically evoked behavioral response (AEBR) in the plainfin midshipman fish, Porichthys notatus, and used this innate startle-like response to characterize the species' auditory capability during larval development. Age and size of larval midshipman were highly correlated (r(2) = 0.92). The AEBR was first observed in larvae at 1.4 cm TL. At a size ≥ 1.8 cm TL, all larvae responded to a broadband stimulus of 154 dB re 1 µPa or -15.2 dB re 1 g (z-axis). The lowest AEBR thresholds were 140-150 dB re 1 µPa or -33 to -23 dB re 1 g for frequencies below 225 Hz. Larval fish in the 1.9-2.4 cm TL size range had significantly lower best evoked frequencies than the other tested size groups. We also investigated the development of the lateral line organ and its function in mediating the AEBR. The lateral line organ is likely involved in mediating the AEBR but is not necessary to evoke the startle-like response. The midshipman auditory and lateral line systems are functional during early development, when the larvae are in the nest, and the auditory system appears to have similar tuning characteristics throughout all life history stages.

  13. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    PubMed

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  14. Feasibility of a real-time hand hygiene notification machine learning system in outpatient clinics.

    PubMed

    Geilleit, R; Hen, Z Q; Chong, C Y; Loh, A P; Pang, N L; Peterson, G M; Ng, K C; Huis, A; de Korne, D F

    2018-04-09

    Various technologies have been developed to improve hand hygiene (HH) compliance in inpatient settings; however, little is known about the feasibility of machine learning technology for this purpose in outpatient clinics. To assess the effectiveness, user experiences, and costs of implementing a real-time HH notification machine learning system in outpatient clinics. In our mixed methods study, a multi-disciplinary team co-created an infrared guided sensor system to automatically notify clinicians to perform HH just before first patient contact. Notification technology effects were measured by comparing HH compliance at baseline (without notifications) with real-time auditory notifications that continued till HH was performed (intervention I) or notifications lasting 15 s (intervention II). User experiences were collected during daily briefings and semi-structured interviews. Costs of implementation of the system were calculated and compared to the current observational auditing programme. Average baseline HH performance before first patient contact was 53.8%. With real-time auditory notifications that continued till HH was performed, overall HH performance increased to 100% (P < 0.001). With auditory notifications of a maximum duration of 15 s, HH performance was 80.4% (P < 0.001). Users emphasized the relevance of real-time notification and contributed to technical feasibility improvements that were implemented in the prototype. Annual running costs for the machine learning system were estimated to be 46% lower than the observational auditing programme. Machine learning technology that enables real-time HH notification provides a promising cost-effective approach to both improving and monitoring HH, and deserves further development in outpatient settings. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  15. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.

  16. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    PubMed

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. SOFM yielded better classification results than the DA methods. Subsequently, measurements from another 37 subjects, unseen by the trained SOFM, were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
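    The classification scheme described above — training a self-organizing feature map on multi-dimensional ERP amplitude vectors, then reading group membership off the map — can be sketched as follows. This is a generic, minimal SOM on synthetic data: the 4×4 grid, training schedule, group means, and labels are illustrative assumptions, not the configuration or data used in the study.

    ```python
    import math
    import random
    from collections import Counter, defaultdict

    random.seed(0)

    def train_som(data, grid=(4, 4), dim=18, epochs=200, lr0=0.5, sigma0=2.0):
        """Train a small self-organizing feature map on feature vectors."""
        rows, cols = grid
        nodes = [[[random.random() for _ in range(dim)] for _ in range(cols)]
                 for _ in range(rows)]
        n_steps = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in data:
                t = step / n_steps
                lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 0.5
                # Best-matching unit (BMU): the grid node closest to the input.
                br, bc = min(((r, c) for r in range(rows) for c in range(cols)),
                             key=lambda rc: sum((a - b) ** 2 for a, b in
                                                zip(nodes[rc[0]][rc[1]], x)))
                # Pull the BMU and its grid neighbours toward the input.
                for r in range(rows):
                    for c in range(cols):
                        h = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                                     / (2 * sigma ** 2))
                        w = nodes[r][c]
                        for i in range(dim):
                            w[i] += lr * h * (x[i] - w[i])
                step += 1
        return nodes

    def bmu(nodes, x):
        rows, cols = len(nodes), len(nodes[0])
        return min(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: sum((a - b) ** 2 for a, b in
                                      zip(nodes[rc[0]][rc[1]], x)))

    def sample(mean, n, dim=18):
        # Synthetic stand-in for 18 ERP amplitude parameters per subject.
        return [[random.gauss(mean, 0.3) for _ in range(dim)] for _ in range(n)]

    train = sample(0.0, 20) + sample(1.5, 20)
    labels = ["control"] * 20 + ["deficit"] * 20
    nodes = train_som(train)

    # Label each map node by majority vote of the training vectors it wins.
    votes = defaultdict(Counter)
    for x, y in zip(train, labels):
        votes[bmu(nodes, x)][y] += 1
    node_label = {rc: cnt.most_common(1)[0][0] for rc, cnt in votes.items()}

    # Classify a held-out subject by the label of its best-matching unit.
    test_subject = sample(1.5, 1)[0]
    print(node_label.get(bmu(nodes, test_subject), "unlabelled"))
    ```

    The design point this illustrates is why an SOFM can outperform discriminant analysis here: the map makes no linearity or normality assumptions about the group boundaries, and unseen subjects are classified simply by where they land on the trained grid.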

  17. Memory for sound, with an ear toward hearing in complex auditory scenes.

    PubMed

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  18. Auditory and vestibular dysfunctions in systemic sclerosis: literature review.

    PubMed

    Rabelo, Maysa Bastos; Corona, Ana Paula

    2014-01-01

    To describe the prevalence of auditory and vestibular dysfunction in individuals with systemic sclerosis (SS) and the hypotheses proposed to explain these changes. We performed a systematic review without meta-analysis of the PubMed, LILACS, Web of Science, SciELO and SCOPUS databases, using combinations of the keywords "systemic sclerosis AND balance OR vestibular" and "systemic sclerosis AND hearing OR auditory." We included articles published in Portuguese, Spanish, or English until December 2011; reviews, letters, and editorials were excluded. We found 254 articles, of which 10 were selected. The study designs were described, and the characteristics and frequency of the auditory and vestibular dysfunctions in these individuals were listed. Afterwards, we examined the hypotheses advanced by the authors to explain the auditory and vestibular dysfunctions in SS. Hearing loss was the most common finding, with prevalence ranging from 20 to 77%, bilateral sensorineural loss being the most frequent type. It is hypothesized that the hearing impairment in SS is due to vascular changes in the cochlea. The prevalence of vestibular disorders ranged from 11 to 63%, and the most frequent findings were changes in caloric testing, positional nystagmus, impaired oculocephalic response, changes in clinical tests of sensory interaction, and benign paroxysmal positional vertigo. A high prevalence of auditory and vestibular dysfunctions in patients with SS was observed. Further research can assist in early identification of these abnormalities, provide resources for professionals who work with these patients, and contribute to improving the quality of life of these individuals.

  19. The neural consequences of age-related hearing loss

    PubMed Central

    Peelle, Jonathan E.; Wingfield, Arthur

    2016-01-01

    During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension. PMID:27262177

  20. Cortical Development and Neuroplasticity in Auditory Neuropathy Spectrum Disorder

    PubMed Central

    Sharma, Anu; Cardon, Garrett

    2015-01-01

    Cortical development is dependent to a large extent on stimulus-driven input. Auditory Neuropathy Spectrum Disorder (ANSD) is a recently described form of hearing impairment where neural dys-synchrony is the predominant characteristic. Children with ANSD provide a unique platform to examine the effects of asynchronous and degraded afferent stimulation on cortical auditory neuroplasticity and behavioral processing of sound. In this review, we describe patterns of auditory cortical maturation in children with ANSD. The disruption of cortical maturation that leads to these various patterns includes high levels of intra-individual cortical variability and deficits in cortical phase synchronization of oscillatory neural responses. These neurodevelopmental changes, which are constrained by sensitive periods for central auditory maturation, are correlated with behavioral outcomes for children with ANSD. Overall, we hypothesize that patterns of cortical development in children with ANSD appear to be markers of the severity of the underlying neural dys-synchrony, providing prognostic indicators of success of clinical intervention with amplification and/or electrical stimulation. PMID:26070426

  1. Auditory experience controls the maturation of song discrimination and sexual response in Drosophila

    PubMed Central

    Li, Xiaodong; Ishimoto, Hiroshi

    2018-01-01

    In birds and higher mammals, auditory experience during development is critical to discriminate sound patterns in adulthood. However, the neural and molecular nature of this acquired ability remains elusive. In fruit flies, acoustic perception has been thought to be innate. Here we report, surprisingly, that auditory experience of a species-specific courtship song in developing Drosophila shapes adult song perception and resultant sexual behavior. Preferences in the song-response behaviors of both males and females were tuned by social acoustic exposure during development. We examined the molecular and cellular determinants of this social acoustic learning and found that GABA signaling acting on the GABAA receptor Rdl in the pC1 neurons, the integration node for courtship stimuli, regulated auditory tuning and sexual behavior. These findings demonstrate that maturation of auditory perception in flies is unexpectedly plastic and is acquired socially, providing a model to investigate how song learning regulates mating preference in insects. PMID:29555017

  2. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another, unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/post-comparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback. Furthermore, independent of the training group, a significant spatial pre-post difference was found in the event-related component P200 (P = .04).

  3. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  4. Auditory input modulates sleep: an intra-cochlear-implanted human model.

    PubMed

    Velluti, Ricardo A; Pedemonte, Marisa; Suárez, Hámlet; Bentancor, Claudia; Rodríguez-Servetti, Zulma

    2010-12-01

    To properly demonstrate the effect of auditory input on sleep of intra-cochlear-implanted patients, the following approach was developed. Four implanted deaf patients were recorded during four nights: two nights with the implant OFF, with no auditory input, and two nights with the implant ON, that is, with normal auditory input, only the common night sounds being present and no additional auditory stimuli delivered. The sleep patterns of another five deaf people were used as controls, exhibiting normal sleep organization. Moreover, the four experimental patients with intra-cochlear devices and the implant OFF also showed normal sleep patterns. On comparison of the night recordings with the implant ON and OFF, a new sleep organization was observed for the recordings with the implant ON, suggesting that brain plasticity may produce changes in the sleep stage percentages while maintaining the ultradian rhythm. During sleep with the implant ON, the analysis of the electroencephalographic delta, theta and alpha bands in the frequency domain, using the Fast Fourier Transform, revealed a diversity of changes in the power originating in the contralateral cortical temporal region. Different power shifts were observed, perhaps related to the exact position of the implant inside the cochlea and the scalp electrode location. In conclusion, this pilot study shows that auditory input in humans can introduce changes in central nervous system activity leading to shifts in sleep characteristics, as previously demonstrated in guinea pigs. We postulate that an intra-cochlear-implanted deaf patient may have a better recovery if the implant is kept ON during the night, that is, during sleep. © 2010 European Sleep Research Society.

  5. Contralateral Noise Stimulation Delays P300 Latency in School-Aged Children.

    PubMed

    Ubiali, Thalita; Sanfins, Milaine Dominici; Borges, Leticia Reis; Colella-Santos, Maria Francisca

    2016-01-01

    The auditory cortex modulates auditory afferents through the olivocochlear system, which innervates the outer hair cells and the afferent neurons under the inner hair cells in the cochlea. Most of the studies that investigated efferent activity in humans focused on evaluating the suppression of otoacoustic emissions by stimulating the contralateral ear with noise, which assesses the activation of the medial olivocochlear bundle. The neurophysiology and the mechanisms involving efferent activity in higher regions of the auditory pathway, however, are still unknown. Also, the lack of studies investigating the effects of noise on the human auditory cortex, especially in the paediatric population, points to the need for recording the late auditory potentials in noise conditions. Assessing the auditory efferents in school-aged children is highly important due to some of their attributed functions, such as selective attention and signal detection in noise, which are important abilities related to the development of language and academic skills. For this reason, the aim of the present study was to evaluate the effects of noise on P300 responses of children with normal hearing. P300 was recorded in 27 children aged from 8 to 14 years with normal hearing in two conditions: with and without contralateral white noise stimulation. P300 latencies were significantly longer in the presence of contralateral noise. No significant changes were observed for the amplitude values. Contralateral white noise stimulation delayed P300 latency in a group of school-aged children with normal hearing. These results suggest a possible influence of medial olivocochlear activation on P300 responses under noise conditions.

  6. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  7. The Effects of Electromagnetic Fields on The Nervous System,

    DTIC Science & Technology

    Superior Cervical Ganglia: Design of Waveguide Apparatus, and Calculation of Specific Absorption Rate; Effects of Electromagnetic Fields on Muscle Contraction; Effects of Electromagnetic Fields on Auditory System: Effect of Noise Masking on Threshold of Evoked Auditory Responses; Microwave-induced Cochlear Microphonics in Guinea Pigs.

  8. Reprogramming Glia Into Neurons in the Peripheral Auditory System as a Solution for Sensorineural Hearing Loss: Lessons From the Central Nervous System

    PubMed Central

    Meas, Steven J.; Zhang, Chun-Li; Dabdoub, Alain

    2018-01-01

    Disabling hearing loss affects over 5% of the world’s population and impacts the lives of individuals from all age groups. Within the next three decades, the worldwide incidence of hearing impairment is expected to double. Since a leading cause of hearing loss is the degeneration of primary auditory neurons (PANs), the sensory neurons of the auditory system that receive input from mechanosensory hair cells in the cochlea, it may be possible to restore hearing by regenerating PANs. A direct reprogramming approach can be used to convert the resident spiral ganglion glial cells into induced neurons to restore hearing. This review summarizes recent advances in reprogramming glia in the CNS to suggest future steps for regenerating the peripheral auditory system. In the coming years, direct reprogramming of spiral ganglion glial cells has the potential to become one of the leading biological strategies to treat hearing impairment. PMID:29593497

  9. Testing Convergent Evolution in Auditory Processing Genes between Echolocating Mammals and the Aye-Aye, a Percussive-Foraging Primate.

    PubMed

    Bankoff, Richard J; Jerjos, Michael; Hohman, Baily; Lauterbur, M Elise; Kistler, Logan; Perry, George H

    2017-07-01

    Several taxonomically distinct mammalian groups-certain microbats and cetaceans (e.g., dolphins)-share both morphological adaptations related to echolocation behavior and strong signatures of convergent evolution at the amino acid level across seven genes related to auditory processing. Aye-ayes (Daubentonia madagascariensis) are nocturnal lemurs with a specialized auditory processing system. Aye-ayes tap rapidly along the surfaces of trees, listening to reverberations to identify the mines of wood-boring insect larvae; this behavior has been hypothesized to functionally mimic echolocation. Here we investigated whether there are signals of convergence in auditory processing genes between aye-ayes and known mammalian echolocators. We developed a computational pipeline (Basic Exon Assembly Tool) that produces consensus sequences for regions of interest from shotgun genomic sequencing data for nonmodel organisms without requiring de novo genome assembly. We reconstructed complete coding region sequences for the seven convergent echolocating bat-dolphin genes for aye-ayes and another lemur. We compared sequences from these two lemurs in a phylogenetic framework with those of bat and dolphin echolocators and appropriate nonecholocating outgroups. Our analysis reaffirms the existence of amino acid convergence at these loci among echolocating bats and dolphins; some methods also detected signals of convergence between echolocating bats and both mice and elephants. However, we observed no significant signal of amino acid convergence between aye-ayes and echolocating bats and dolphins, suggesting that aye-aye tap-foraging auditory adaptations represent distinct evolutionary innovations. These results are also consistent with a developing consensus that convergent behavioral ecology does not reliably predict convergent molecular evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
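    The core step the abstract attributes to the pipeline — producing a consensus sequence for a region of interest directly from overlapping reads, without de novo genome assembly — can be illustrated with a toy majority-vote consensus. The read strings, gap character, and `N` fallback below are invented for illustration and do not reproduce the actual Basic Exon Assembly Tool.

    ```python
    from collections import Counter

    def consensus(aligned_reads):
        """Majority-vote consensus over a pileup of reads aligned to a
        reference region. Gaps ('-') are ignored; columns with no coverage
        are emitted as 'N'. A toy sketch, not the BEAT pipeline itself."""
        length = max(len(r) for r in aligned_reads)
        out = []
        for i in range(length):
            # Count the bases covering column i, skipping gaps.
            bases = Counter(r[i] for r in aligned_reads
                            if i < len(r) and r[i] != "-")
            out.append(bases.most_common(1)[0][0] if bases else "N")
        return "".join(out)

    # Hypothetical shotgun reads already aligned to the same region;
    # the third read carries a sequencing error (T at position 2).
    reads = ["ACGTAC-TGA",
             "ACGTACGTGA",
             "ACTTACGTG-",
             "ACGTACGTGA"]
    print(consensus(reads))  # prints ACGTACGTGA
    ```

    The point of the illustration is that per-column majority voting recovers the region's sequence even when individual reads contain gaps or errors, which is what lets such a pipeline skip whole-genome assembly for targeted loci.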

  10. Hypervelocity Technology Escape System Concepts. Volume 1. Development and Evaluation

    DTIC Science & Technology

    1988-07-01

    airplane escape systems. These include separation at high dynamic pressure, stability, impact attenuation, crew member accelerations, adequate... Auditory changes (TTS; PTS); shock attenuator design; restraint system design; limb flail; non-auditory changes (gag, decreased visual acuity); reduced psycho-motor... detected by ultrasonic technique. The DCS symptoms may not appear until at slightly lower total pressures (8-9 psia). Since the pressurization

  11. Salicylate-induced cochlear impairments, cortical hyperactivity and re-tuning, and tinnitus.

    PubMed

    Chen, Guang-Di; Stolzberg, Daniel; Lobarinas, Edward; Sun, Wei; Ding, Dalian; Salvi, Richard

    2013-01-01

    High doses of sodium salicylate (SS) have long been known to induce temporary hearing loss and tinnitus, effects attributed to cochlear dysfunction. However, our recent publications reviewed here show that SS can induce profound, permanent, and unexpected changes in the cochlea and central nervous system. Prolonged treatment with SS permanently decreased the cochlear compound action potential (CAP) amplitude in vivo. In vitro, high dose SS resulted in a permanent loss of spiral ganglion neurons and nerve fibers, but did not damage hair cells. Acute treatment with high-dose SS produced a frequency-dependent decrease in the amplitude of distortion product otoacoustic emissions and CAP. Losses were greatest at low and high frequencies, but least at the mid-frequencies (10-20 kHz), the mid-frequency band that corresponds to the tinnitus pitch measured behaviorally. In the auditory cortex, medial geniculate body and amygdala, high-dose SS enhanced sound-evoked neural responses at high stimulus levels, but it suppressed activity at low intensities and elevated response threshold. When SS was applied directly to the auditory cortex or amygdala, it only enhanced sound evoked activity, but did not elevate response threshold. Current source density analysis revealed enhanced current flow into the supragranular layer of auditory cortex following systemic SS treatment. Systemic SS treatment also altered tuning in auditory cortex and amygdala; low frequency and high frequency multiunit clusters up-shifted or down-shifted their characteristic frequency into the 10-20 kHz range thereby altering auditory cortex tonotopy and enhancing neural activity at mid-frequencies corresponding to the tinnitus pitch. 
These results suggest that SS-induced hyperactivity in auditory cortex originates in the central nervous system, that the amygdala potentiates these effects and that the SS-induced tonotopic shifts in auditory cortex, the putative neural correlate of tinnitus, arises from the interaction between the frequency-dependent losses in the cochlea and hyperactivity in the central nervous system. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. [Auditory training in workshops: group therapy option].

    PubMed

    Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa

    2006-01-01

    BACKGROUND: auditory training in groups. AIM: to verify, in a group of individuals with mental retardation, the efficacy of auditory training in a workshop environment. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system was verified through evoked otoacoustic emissions. Participants were evaluated using a specific protocol concerning auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination and auditory comprehension) at the beginning and at the end of the project. Data entry, processing and analysis were performed using the Epi Info 6.04 software. RESULTS: the groups did not differ regarding age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006) and auditory discrimination (p=0.03). CONCLUSION: group auditory training demonstrated to be effective in individuals with mental retardation, with an improvement observed in their auditory abilities. More studies, with a larger number of participants, are necessary in order to confirm the findings of the present research. These results will help public health professionals to reanalyze the theoretical models used for therapy, so that they can apply specific methods according to individual needs, such as auditory training workshops.

  13. Transplantation of conditionally immortal auditory neuroblasts to the auditory nerve.

    PubMed

    Sekiya, Tetsuji; Holley, Matthew C; Kojima, Ken; Matsumoto, Masahiro; Helyer, Richard; Ito, Juichi

    2007-04-01

    Cell transplantation is a realistic potential therapy for replacement of auditory sensory neurons and could benefit patients with cochlear implants or acoustic neuropathies. The procedure involves many experimental variables, including the nature and conditioning of donor cells, surgical technique and degree of degeneration in the host tissue. It is essential to control these variables in order to develop cell transplantation techniques effectively. We have characterized a conditionally immortal, mouse cell line suitable for transplantation to the auditory nerve. Structural and physiological markers defined the cells as early auditory neuroblasts that lacked neuronal, voltage-gated sodium or calcium currents and had an undifferentiated morphology. When transplanted into the auditory nerves of rats in vivo, the cells migrated peripherally and centrally and aggregated to form coherent, ectopic 'ganglia'. After 7 days they expressed beta 3-tubulin and adopted a similar morphology to native spiral ganglion neurons. They also developed bipolar projections aligned with the host nerves. There was no evidence for uncontrolled proliferation in vivo and cells survived for at least 63 days. If cells were transplanted with the appropriate surgical technique then the auditory brainstem responses were preserved. We have shown that immortal cell lines can potentially be used in the mammalian ear, that it is possible to differentiate significant numbers of cells within the auditory nerve tract and that surgery and cell injection can be achieved with no damage to the cochlea and with minimal degradation of the auditory brainstem response.

  14. Chronic stress impairs acoustic conditioning more than visual conditioning in rats: morphological and behavioural evidence.

    PubMed

    Dagnino-Subiabre, A; Terreros, G; Carmona-Fontaine, C; Zepeda, R; Orellana, J A; Díaz-Véliz, G; Mora, S; Aboitiz, F

    2005-01-01

    Chronic stress affects brain areas involved in learning and emotional responses. These alterations have been related with the development of cognitive deficits in major depression. The aim of this study was to determine the effect of chronic immobilization stress on the auditory and visual mesencephalic regions in the rat brain. We analyzed in Golgi preparations whether stress impairs the neuronal morphology of the inferior (auditory processing) and superior colliculi (visual processing). Afterward, we examined the effect of stress on acoustic and visual conditioning using an avoidance conditioning test. We found that stress induced dendritic atrophy in inferior colliculus neurons and did not affect neuronal morphology in the superior colliculus. Furthermore, stressed rats showed a stronger impairment in acoustic conditioning than in visual conditioning. Fifteen days post-stress the inferior colliculus neurons completely restored their dendritic structure, showing a high level of neural plasticity that is correlated with an improvement in acoustic learning. These results suggest that chronic stress has more deleterious effects in the subcortical auditory system than in the visual system and may affect the aversive system and fear-like behaviors. Our study opens a new approach to understand the pathophysiology of stress and stress-related disorders such as major depression.

  15. Localized Cell and Drug Delivery for Auditory Prostheses

    PubMed Central

    Hendricks, Jeffrey L.; Chikar, Jennifer A.; Crumling, Mark A.; Raphael, Yehoash; Martin, David C.

    2011-01-01

    Localized cell and drug delivery to the cochlea and central auditory pathway can improve the safety and performance of implanted auditory prostheses (APs). While generally successful, these devices have a number of limitations and adverse effects including limited tonal and dynamic ranges, channel interactions, unwanted stimulation of non-auditory nerves, immune rejection, and infections including meningitis. Many of these limitations are associated with the tissue reactions to implanted auditory prosthetic devices and the gradual degeneration of the auditory system following deafness. Strategies to reduce the insertion trauma, degeneration of target neurons, fibrous and bony tissue encapsulation, and immune activation can improve the viability of tissue required for AP function as well as improve the resolution of stimulation for reduced channel interaction and improved place-pitch and level discrimination. Many pharmaceutical compounds have been identified that promote the viability of auditory tissue and prevent inflammation and infection. Cell delivery and gene therapy have provided promising results for treating hearing loss and reversing degeneration. Currently, many clinical and experimental methods can produce extremely localized and sustained drug delivery to address AP limitations. These methods provide better control over drug concentrations while eliminating the adverse effects of systemic delivery. Many of these drug delivery techniques can be integrated into modern auditory prosthetic devices to optimize the tissue response to the implanted device and reduce the risk of infection or rejection. Together, these methods and pharmaceutical agents can be used to optimize the tissue-device interface for improved AP safety and effectiveness. PMID:18573323

  16. Anatomy, Physiology and Function of the Auditory System

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles: malleus, incus and stapes) and the inner ear (the cochlea, which is connected to the three semicircular canals by the vestibule, the organ providing the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. The acoustic information is subsequently processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  17. Impact of Language on Development of Auditory-Visual Speech Perception

    ERIC Educational Resources Information Center

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  18. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  19. Using Auditory Steady State Responses to Outline the Functional Connectivity in the Tinnitus Brain

    PubMed Central

    Schlee, Winfried; Weisz, Nathan; Bertrand, Olivier; Hartmann, Thomas; Elbert, Thomas

    2008-01-01

    Background: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system; however, it has recently been suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. Methods and Findings: Using whole-head magnetoencephalography, we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe, and between the anterior cingulum and the right parietal lobe, showed significant condition × group interactions and were correlated with individual tinnitus distress ratings only in the tinnitus condition, not in the control conditions. Conclusions: To the best of our knowledge, this is the first study to demonstrate the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should provide relief of tinnitus. PMID:19005566
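    The long-range phase synchronization analyzed in this record is commonly quantified as a phase-locking value (PLV). Below is a minimal numpy-only sketch of that statistic; the signals, frequencies, and threshold are illustrative assumptions, and the study's actual MEG preprocessing and sensor selection are not reproduced here.

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """PLV = |mean unit vector of the phase difference|.
    1.0 means a perfectly constant phase lag between the signals;
    values near 0 mean no consistent phase coupling."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

t = np.linspace(0, 1, 1000, endpoint=False)
phase_a = 2 * np.pi * 40 * t                  # phase of a 40 Hz oscillation
phase_b = phase_a + 0.7                       # same rhythm with a fixed lag
rng = np.random.default_rng(0)
phase_c = rng.uniform(0, 2 * np.pi, t.size)   # unrelated random phases

print(round(phase_locking_value(phase_a, phase_b), 3))  # 1.0
print(phase_locking_value(phase_a, phase_c) < 0.2)      # True
```

A constant phase offset yields a PLV of exactly 1, while independent phases average toward 0, which is why the measure is suited to detecting stable long-range coupling.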

  20. Development of vestibular afferent projections into the hindbrain and their central targets

    NASA Technical Reports Server (NTRS)

    Maklad, Adel; Fritzsch, Bernd

    2003-01-01

    In contrast to most other sensory systems, hardly anything is known about the neuroanatomical development of central projections of primary vestibular neurons or about how their second-order target neurons develop. Recent data suggest that afferent projections may develop much like those of other sensory systems, with the overall projection formed first by molecular means, followed by an as yet unspecified phase of activity-mediated refinement. The latter aspect has not been tested critically, and most molecules that guide the initial projection are unknown. The molecular and topological origin of the vestibular and cochlear nucleus neurons is also only partially understood. Auditory and vestibular nuclei form from several rhombomeres, and a given rhombomere can contribute to two or more auditory or vestibular nuclei. Rhombomere compartments develop as functional subdivisions from a single column that extends from the hindbrain to the spinal cord. Suggestions are provided for the molecular origin of these columns, but data on specific mutants testing these proposals are not yet available. Overall, the functional significance of both overlapping and segregated projections is not yet fully explored experimentally in mammals. Such a lack of detail about the adult organization compromises future developmental analysis.

  1. An Auditory BCI System for Assisting CRS-R Behavioral Assessment in Patients with Disorders of Consciousness

    NASA Astrophysics Data System (ADS)

    Xiao, Jun; Xie, Qiuyou; He, Yanbin; Yu, Tianyou; Lu, Shenglin; Huang, Ningmeng; Yu, Ronghao; Li, Yuanqing

    2016-09-01

    The Coma Recovery Scale-Revised (CRS-R) is a consistent and sensitive behavioral assessment standard for disorders of consciousness (DOC) patients. However, the CRS-R has limitations due to its dependence on behavioral markers, which has led to a high rate of misdiagnosis. Brain-computer interfaces (BCIs), which directly detect brain activities without any behavioral expression, can be used to evaluate a patient’s state. In this study, we explored the application of BCIs in assisting CRS-R assessments of DOC patients. Specifically, an auditory passive EEG-based BCI system with an oddball paradigm was proposed to facilitate the evaluation of one item of the auditory function scale in the CRS-R - the auditory startle. The results obtained from five healthy subjects validated the efficacy of the BCI system. Nineteen DOC patients participated in the CRS-R and BCI assessments, of which three patients exhibited no responses in the CRS-R assessment but were responsive to auditory startle in the BCI assessment. These results revealed that a proportion of DOC patients who have no behavioral responses in the CRS-R assessment can generate neural responses, which can be detected by our BCI system. Therefore, the proposed BCI may provide more sensitive results than the CRS-R and thus assist CRS-R behavioral assessments.

  3. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948
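    The interaural-time-difference detection described above is often illustrated computationally by cross-correlating the two ear signals and taking the best-matching lag. The sketch below is a toy model; the sampling rate, signal shape, and 300 µs delay are assumptions for illustration, not values from the paper.

```python
import numpy as np

fs = 100_000                                   # 100 kHz -> 10 µs lag resolution
t = np.arange(0, 0.02, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t) * np.exp(-50 * t)  # decaying 500 Hz burst

itd_samples = 30                               # simulate a 300 µs interaural delay
left = tone
right = np.concatenate([np.zeros(itd_samples), tone[:-itd_samples]])

# The lag that maximizes the cross-correlation estimates the ITD
lags = np.arange(-len(left) + 1, len(left))
xcorr = np.correlate(right, left, mode="full")
est_itd_us = lags[np.argmax(xcorr)] * 1e6 / fs
print(est_itd_us)  # 300.0
```

The lag resolution is set by the sampling rate, which is one reason microsecond-scale ITD sensitivity is remarkable: the brainstem achieves it without "sampling" at megahertz rates, relying instead on network computation across many phase-locked fibres.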

  4. Music training relates to the development of neural mechanisms of selective auditory attention.

    PubMed

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
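    The metric in this record, trial-to-trial variability of the evoked response, can be sketched as the across-trial standard deviation of a noisy response, averaged over time. Everything below (the response template, noise levels, trial counts) is invented for illustration and is not the study's EEG data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 50, 200
template = np.sin(np.linspace(0, 4 * np.pi, n_samples))  # idealized evoked response

def evoked_variability(noise_sd):
    """Mean over time points of the across-trial standard deviation."""
    trials = template + rng.normal(0.0, noise_sd, (n_trials, n_samples))
    return np.std(trials, axis=0).mean()

# Selective attention modeled (hypothetically) as reduced trial-to-trial noise
attended, ignored = evoked_variability(0.5), evoked_variability(1.5)
print(attended < ignored)  # True
```

In this toy model the variability estimate tracks the injected noise level directly, which is the intuition behind using response variability as an index of attentional control.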

  5. The Potential Role of the cABR in Assessment and Management of Hearing Impairment

    PubMed Central

    Anderson, Samira; Kraus, Nina

    2013-01-01

    Hearing aid technology has improved dramatically in the last decade, especially in the ability to adaptively respond to dynamic aspects of background noise. Despite these advancements, however, hearing aid users continue to report difficulty hearing in background noise and having trouble adjusting to amplified sound quality. These difficulties may arise in part from current approaches to hearing aid fittings, which largely focus on increased audibility and management of environmental noise. These approaches do not take into account the fact that sound is processed all along the auditory system from the cochlea to the auditory cortex. Older adults represent the largest group of hearing aid wearers; yet older adults are known to have deficits in temporal resolution in the central auditory system. Here we review evidence that supports the use of the auditory brainstem response to complex sounds (cABR) in the assessment of hearing-in-noise difficulties and auditory training efficacy in older adults. PMID:23431313

  6. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30–80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain-computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
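    An individual resonance frequency of the kind discussed above is typically located as the power-spectral peak within the gamma band. A synthetic-signal sketch follows; the 42 Hz oscillation, noise level, and recording length are assumptions for illustration, not the review's methods.

```python
import numpy as np

fs = 500
t = np.arange(0, 4, 1 / fs)                     # 4 s of data -> 0.25 Hz resolution
rng = np.random.default_rng(2)
# Synthetic "EEG": a 42 Hz endogenous oscillation buried in broadband noise
eeg = 0.8 * np.sin(2 * np.pi * 42 * t) + rng.normal(0.0, 1.0, t.size)

power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

# Individual gamma frequency: the power peak restricted to ~30-80 Hz
band = (freqs >= 30) & (freqs <= 80)
igf = freqs[band][np.argmax(power[band])]
print(igf)  # 42.0
```

An estimate like this could then set a subject-specific presentation rate, matching the review's suggestion that stimulation aligned to endogenous rhythms may be advantageous.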

  7. Position-dependent patterning of spontaneous action potentials in immature cochlear inner hair cells

    PubMed Central

    Johnson, Stuart L.; Eckrich, Tobias; Kuhn, Stephanie; Zampini, Valeria; Franz, Christoph; Ranatunga, Kishani M.; Roberts, Terri P.; Masetto, Sergio; Knipper, Marlies; Kros, Corné J.; Marcotti, Walter

    2011-01-01

    Spontaneous action potential activity is crucial for mammalian sensory system development. In the auditory system, patterned firing activity has been observed in immature spiral ganglion cells and brain-stem neurons and is likely to depend on cochlear inner hair cell (IHC) action potentials. It remains uncertain whether spiking activity is intrinsic to developing IHCs and whether it shows patterning. We found that action potentials are intrinsically generated by immature IHCs of altricial rodents and that apical IHCs exhibit bursting activity as opposed to more sustained firing in basal cells. We show that the efferent neurotransmitter ACh, by fine-tuning the IHC’s resting membrane potential (Vm), is crucial for the bursting pattern in apical cells. Endogenous extracellular ATP also contributes to the Vm of apical and basal IHCs by activating SK2 channels. We hypothesize that the difference in firing pattern along the cochlea instructs the tonotopic differentiation of IHCs and auditory pathway. PMID:21572434

  8. Position-dependent patterning of spontaneous action potentials in immature cochlear inner hair cells.

    PubMed

    Johnson, Stuart L; Eckrich, Tobias; Kuhn, Stephanie; Zampini, Valeria; Franz, Christoph; Ranatunga, Kishani M; Roberts, Terri P; Masetto, Sergio; Knipper, Marlies; Kros, Corné J; Marcotti, Walter

    2011-06-01

    Spontaneous action potential activity is crucial for mammalian sensory system development. In the auditory system, patterned firing activity has been observed in immature spiral ganglion and brain-stem neurons and is likely to depend on cochlear inner hair cell (IHC) action potentials. It remains uncertain whether spiking activity is intrinsic to developing IHCs and whether it shows patterning. We found that action potentials were intrinsically generated by immature IHCs of altricial rodents and that apical IHCs showed bursting activity as opposed to more sustained firing in basal cells. We show that the efferent neurotransmitter acetylcholine fine-tunes the IHC's resting membrane potential (V(m)), and as such is crucial for the bursting pattern in apical cells. Endogenous extracellular ATP also contributes to the V(m) of apical and basal IHCs by triggering small-conductance Ca(2+)-activated K(+) (SK2) channels. We propose that the difference in firing pattern along the cochlea instructs the tonotopic differentiation of IHCs and auditory pathway.

  9. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder.

    PubMed

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-03-01

    This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.

  10. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  11. Diminished auditory sensory gating during active auditory verbal hallucinations.

    PubMed

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
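    The S2/S1 ratio defined in this abstract reduces to simple peak arithmetic. A minimal sketch follows; the ERP values and latency window are hypothetical, and a real analysis would use component-specific windows for P50, N100 and P200.

```python
import numpy as np

def gating_ratio(erp_s1, erp_s2, window):
    """S2/S1 ratio of peak ERP amplitudes within a latency window given as
    (start, stop) sample indices. Higher ratios indicate weaker suppression
    of the response to the second click."""
    lo, hi = window
    return np.max(np.abs(erp_s2[lo:hi])) / np.max(np.abs(erp_s1[lo:hi]))

# Hypothetical single-channel ERPs (arbitrary units)
erp_s1 = np.array([0.1, 0.2, 2.0, 0.3, 0.1])   # response to first click
erp_s2 = np.array([0.1, 0.1, 1.0, 0.2, 0.1])   # partially suppressed response
print(gating_ratio(erp_s1, erp_s2, (0, 5)))    # 0.5
```

A ratio near 0 would indicate strong gating (S2 fully suppressed); ratios approaching or exceeding 1, as reported during the AVH-on state, indicate a failure to suppress the repeated stimulus.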

  12. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy

    PubMed Central

    Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142

  13. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    PubMed

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the differing nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Mobile phones: influence on auditory and vestibular systems.

    PubMed

    Balbani, Aracy Pereira Silveira; Montovani, Jair Cortez

    2008-01-01

    Telecommunications systems emit radiofrequency, an invisible electromagnetic radiation. Mobile phones operate with microwaves (450–900 MHz in the analog service, and 1.8–2.2 GHz in the digital service) very close to the user's ear. The skin, inner ear, cochlear nerve and the temporal lobe surface absorb the radiofrequency energy. Aim: literature review on the influence of cellular phones on hearing and balance. Study design: systematic review. We reviewed papers on the influence of mobile phones on auditory and vestibular systems from the Lilacs and Medline databases, published from 2000 to 2005, and also materials available on the Internet. Studies concerning mobile phone radiation and the risk of developing an acoustic neuroma have reported controversial results. Some authors did not see evidence of a higher risk of tumor development in mobile phone users, while others reported that use of analog cellular phones for ten or more years increases the risk of developing the tumor. Acute exposure to mobile phone microwaves does not influence the function of cochlear outer hair cells in vivo or in vitro, the electrical properties of the cochlear nerve, or the physiology of the vestibular system in humans. Analog hearing aids are more susceptible to the electromagnetic interference caused by digital mobile phones. Conclusion: there is no evidence of cochleo-vestibular lesions caused by cellular phones.

  15. How Hearing Loss Impacts Communication. Tipsheet: Serving Students Who Are Hard of Hearing

    ERIC Educational Resources Information Center

    Atcherson, Samuel R.; Johnson, Marni I.

    2009-01-01

    Hearing, or auditory processing, involves the use of many hearing skills in a single or combined fashion. The sounds that humans hear can be characterized by their intensity (loudness), frequency (pitch), and timing. Impairment of any of the auditory structures from the visible ear to the central auditory nervous system within the brain can have a…

  16. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    PubMed Central

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  17. No auditory experience, no tinnitus: Lessons from subjects with congenital- and acquired single-sided deafness.

    PubMed

    Lee, Sang-Yeon; Nam, Dong Woo; Koo, Ja-Won; De Ridder, Dirk; Vanneste, Sven; Song, Jae-Jin

    2017-10-01

    Recent studies have adopted the Bayesian brain model to explain the generation of tinnitus in subjects with auditory deafferentation. That is, as the human brain works in a Bayesian manner to reduce environmental uncertainty, missing auditory information due to hearing loss may cause auditory phantom percepts, i.e., tinnitus. This type of deafferentation-induced auditory phantom percept should be preceded by auditory experience because the fill-in phenomenon, namely tinnitus, is based upon auditory prediction and the resultant prediction error. For example, a recent animal study observed the absence of tinnitus in cats with congenital single-sided deafness (SSD; Eggermont and Kral, Hear Res 2016). However, no human studies have investigated the presence and characteristics of tinnitus in subjects with congenital SSD. Thus, the present study sought to reveal differences in the generation of tinnitus between subjects with congenital SSD and those with acquired SSD to evaluate the replicability of previous animal studies. This study enrolled 20 subjects with congenital SSD and 44 subjects with acquired SSD and examined the presence and characteristics of tinnitus in the groups. None of the 20 subjects with congenital SSD perceived tinnitus on the affected side, whereas 30 of 44 subjects with acquired SSD experienced tinnitus on the affected side. Additionally, there were significant positive correlations between tinnitus characteristics and the audiometric characteristics of the SSD. In accordance with the findings of the recent animal study, tinnitus was absent in subjects with congenital SSD, but relatively frequent in subjects with acquired SSD, which suggests that the development of tinnitus should be preceded by auditory experience. In other words, subjects with profound congenital peripheral deafferentation do not develop auditory phantom percepts because no auditory predictions are available from the Bayesian brain. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Differential Gene Expression During Compensatory Sprouting of Dendrites in the Auditory System of the Cricket Gryllus bimaculatus

    PubMed Central

    Horch, Hadley W.; McCarthy, Sarah S.; Johansen, Susan L.; Harris, James M.

    2013-01-01

    Neurons that lose their pre-synaptic partners due to injury usually retract or die. However, when the auditory interneurons of the cricket are denervated, dendrites respond by growing across the midline and forming novel synapses with the opposite auditory afferents. Suppression subtractive hybridization was used to detect transcriptional changes three days after denervation. This is a stage at which we demonstrate robust compensatory dendritic sprouting. While 49 unique candidates were downregulated, no sufficiently upregulated candidates were identified at this time point. Several candidates identified in this study are known to influence the translation and degradation of proteins in other systems. The potential role of these factors in the compensatory sprouting of cricket auditory interneurons in response to denervation is discussed. PMID:19453768

  19. Cortical Interactions Underlying the Production of Speech Sounds

    ERIC Educational Resources Information Center

    Guenther, Frank H.

    2006-01-01

    Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…

  20. The Corticofugal Effects of Auditory Cortex Microstimulation on Auditory Nerve and Superior Olivary Complex Responses Are Mediated via Alpha-9 Nicotinic Receptor Subunit

    PubMed Central

    Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.

    2016-01-01

    Background and Objective The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498

  1. A real-time detector system for precise timing of audiovisual stimuli.

    PubMed

    Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna

    2012-01-01

The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information about stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real-time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the audio or video stimulus presentation tools and signal acquisition system used. The sensor solution consists of two independent sensors: one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab.
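    The adjustable dead time described in this record can be sketched as a refractory window after each detection: once a threshold crossing emits a marker, further crossings are ignored for a fixed number of samples, so a multipart stimulus yields one marker rather than several false detections. The function below is a hypothetical illustration of that logic (names and signature are our own, not from the sensor's firmware):

    ```python
    # Hypothetical sketch of threshold detection with an adjustable dead time.
    # After each detection, threshold crossings are ignored for `dead_time`
    # samples, so a complex multipart stimulus produces a single marker.

    def detect_onsets(samples, threshold, dead_time):
        """Return the sample indices at which a marker would be emitted."""
        markers = []
        hold_off = 0  # samples remaining in the current dead-time window
        for i, s in enumerate(samples):
            if hold_off > 0:
                hold_off -= 1
            elif abs(s) >= threshold:
                markers.append(i)
                hold_off = dead_time
        return markers
    ```

    With a dead time of zero, every suprathreshold sample fires; lengthening the dead time collapses a burst of crossings into one marker, which is the behaviour the abstract attributes to the sensor.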

  2. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.

  3. Hearing after congenital deafness: central auditory plasticity and sensory deprivation.

    PubMed

    Kral, A; Hartmann, R; Tillein, J; Heid, S; Klinke, R

    2002-08-01

    The congenitally deaf cat suffers from a degeneration of the inner ear. The organ of Corti bears no hair cells, yet the auditory afferents are preserved. Since these animals have no auditory experience, they were used as a model for congenital deafness. Kittens were equipped with a cochlear implant at different ages and electro-stimulated over a period of 2.0-5.5 months using a monopolar single-channel compressed analogue stimulation strategy (VIENNA-type signal processor). Following a period of auditory experience, we investigated cortical field potentials in response to electrical biphasic pulses applied by means of the cochlear implant. In comparison to naive unstimulated deaf cats and normal hearing cats, the chronically stimulated animals showed larger cortical regions producing middle-latency responses at or above 300 microV amplitude at the contralateral as well as the ipsilateral auditory cortex. The cortex ipsilateral to the chronically stimulated ear did not show any signs of reduced responsiveness when stimulating the 'untrained' ear through a second cochlear implant inserted in the final experiment. With comparable duration of auditory training, the activated cortical area was substantially smaller if implantation had been performed at an older age of 5-6 months. The data emphasize that young sensory systems in cats have a higher capacity for plasticity than older ones and that there is a sensitive period for the cat's auditory system.

  4. Tympanal spontaneous oscillations reveal mechanisms for the control of amplified frequency in tree crickets

    NASA Astrophysics Data System (ADS)

    Mhatre, Natasha; Robert, Daniel

    2018-05-01

Tree cricket hearing shows all the features of an actively amplified auditory system, particularly spontaneous oscillations (SOs) of the tympanal membrane. As expected from an actively amplified auditory system, SO frequency and the peak frequency in evoked responses as observed in sensitivity spectra are correlated. Sensitivity spectra also show compressive non-linearity at this frequency, i.e. a reduction in peak height and sharpness with increasing stimulus amplitude. Both SO and amplified frequency also change with ambient temperature, allowing the auditory system to maintain a filter that is matched to song frequency. In tree crickets, remarkably, song frequency varies with ambient temperature. Interestingly, active amplification has been reported to be switched ON and OFF. The mechanism of this switch is as yet unknown. In order to gain insights into this switch, we recorded and analysed SOs as the auditory system transitioned from the passive (OFF) state to the active (ON) state. We found that while SO amplitude did not follow a fixed pattern, SO frequency changed during the ON-OFF transition. SOs were first detected above noise levels at low frequencies, sometimes well below the known song frequency range (0.5-1 kHz lower). SO frequency was observed to increase over the next ~30 minutes, in the absence of any ambient temperature change, before settling at a frequency within the range of conspecific song. We examine the frequency shift in SO spectra with temperature and during the ON/OFF transition and discuss the mechanistic implications. To our knowledge, such modulation of active auditory amplification and its dynamics are unique amongst auditory animals.

  5. Compilation and Clinical Applicability of an Early Auditory Processing Assessment Battery for Young Children.

    ERIC Educational Resources Information Center

    Fair, Lisl; Louw, Brenda; Hugo, Rene

    2001-01-01

This study compiled a comprehensive early auditory processing skills assessment battery and evaluated the battery with toddlers with (n=8) and without (n=9) early recurrent otitis media. The assessment battery successfully distinguished between normal and deficient early auditory processing development in the subjects. The study also found parents…

  6. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

    PONS, FERRAN; ANDREU, LLORENC.; SANZ-TORRENT, MONICA; BUIL-LEGAZ, LUCIA; LEWKOWICZ, DAVID J.

    2014-01-01

Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  7. Assembly of the Auditory Circuitry by a Hox Genetic Network in the Mouse Brainstem

    PubMed Central

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M.; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem. PMID:23408898

  8. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    PubMed

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  9. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  10. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    PubMed

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
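    The template-based decoding idea described in this record can be illustrated with a small sketch: represent each word as a sequence of identifiers for the feature-detecting neurons in firing order, and score a candidate against stored templates by the length of the longest common subsequence (LCS). This is an assumed simplification for illustration (the identifiers, names, and normalization here are ours, not the paper's exact formulation):

    ```python
    # Illustrative sketch of spike-sequence template matching via LCS length.
    # A spike sequence is modeled as a list of neuron IDs in firing order;
    # a word is recognized by the template sharing the longest common
    # subsequence with the input sequence.

    def lcs_length(a, b):
        """Classic dynamic-programming longest-common-subsequence length."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[-1][-1]

    def classify(spike_seq, templates):
        """Return the label whose template is most similar to spike_seq."""
        return max(templates, key=lambda label: lcs_length(spike_seq, templates[label]))
    ```

    Because the LCS tolerates insertions and deletions, spurious or missing spikes caused by noise lower the score only gradually, which is one intuition for the noise robustness the abstract reports.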

  11. Neural preservation underlies speech improvement from auditory deprivation in young cochlear implant recipients.

    PubMed

    Feng, Gangyi; Ingvalson, Erin M; Grieco-Calub, Tina M; Roberts, Megan Y; Ryan, Maura E; Birmingham, Patrick; Burrowes, Delilah; Young, Nancy M; Wong, Patrick C M

    2018-01-30

    Although cochlear implantation enables some children to attain age-appropriate speech and language development, communicative delays persist in others, and outcomes are quite variable and difficult to predict, even for children implanted early in life. To understand the neurobiological basis of this variability, we used presurgical neural morphological data obtained from MRI of individual pediatric cochlear implant (CI) candidates implanted younger than 3.5 years to predict variability of their speech-perception improvement after surgery. We first compared neuroanatomical density and spatial pattern similarity of CI candidates to that of age-matched children with normal hearing, which allowed us to detail neuroanatomical networks that were either affected or unaffected by auditory deprivation. This information enables us to build machine-learning models to predict the individual children's speech development following CI. We found that regions of the brain that were unaffected by auditory deprivation, in particular the auditory association and cognitive brain regions, produced the highest accuracy, specificity, and sensitivity in patient classification and the most precise prediction results. These findings suggest that brain areas unaffected by auditory deprivation are critical to developing closer to typical speech outcomes. Moreover, the findings suggest that determination of the type of neural reorganization caused by auditory deprivation before implantation is valuable for predicting post-CI language outcomes for young children.

  12. Music From the Very Beginning-A Neuroscience-Based Framework for Music as Therapy for Preterm Infants and Their Parents.

    PubMed

    Haslbeck, Friederike Barbara; Bassler, Dirk

    2018-01-01

Human and animal studies demonstrate that early auditory experiences influence brain development. These findings are particularly crucial following preterm birth, as the plasticity of auditory regions and cortical development are heavily dependent on the quality of auditory stimulation. Brain maturation in preterm infants may be affected, among other things, by the overwhelming auditory environment of the neonatal intensive care unit (NICU). Conversely, auditory deprivation (e.g., the lack of the regular intrauterine rhythms of the maternal heartbeat and the maternal voice) may also have an impact on brain maturation. Therefore, a nurturing enrichment of the auditory environment for preterm infants is warranted. Creative music therapy (CMT) addresses these demands by offering infant-directed singing in lullaby-style that is continually adapted to the neonate's needs. The therapeutic approach is tailored to the individual developmental stage, entrained to the breathing rhythm, and adapted to the subtle expressions of the newborn. Not only the therapist and the neonate but also the parents play a role in CMT. In this article, we describe how to apply music therapy in a neonatal intensive care environment to support very preterm infants and their families. We speculate that the enriched musical experience may promote brain development and we critically discuss the available evidence in support of our assumption.

  13. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    PubMed

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A physiological and behavioral system for hearing restoration with cochlear implants

    PubMed Central

    King, Julia; Shehu, Ina; Roland, J. Thomas; Svirsky, Mario A.

    2016-01-01

    Cochlear implants are neuroprosthetic devices that provide hearing to deaf patients, although outcomes are highly variable even with prolonged training and use. The central auditory system must process cochlear implant signals, but it is unclear how neural circuits adapt—or fail to adapt—to such inputs. The knowledge of these mechanisms is required for development of next-generation neuroprosthetics that interface with existing neural circuits and enable synaptic plasticity to improve perceptual outcomes. Here, we describe a new system for cochlear implant insertion, stimulation, and behavioral training in rats. Animals were first ensured to have significant hearing loss via physiological and behavioral criteria. We developed a surgical approach for multichannel (2- or 8-channel) array insertion, comparable with implantation procedures and depth in humans. Peripheral and cortical responses to stimulation were used to program the implant objectively. Animals fitted with implants learned to use them for an auditory-dependent task that assesses frequency detection and recognition in a background of environmentally and self-generated noise and ceased responding appropriately to sounds when the implant was temporarily inactivated. This physiologically calibrated and behaviorally validated system provides a powerful opportunity to study the neural basis of neuroprosthetic device use and plasticity. PMID:27281743

  15. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  16. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking is implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
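    For context, the ERB scale referenced in this record is commonly computed with the standard Glasberg and Moore (1990) approximation for normal-hearing listeners; we assume, without confirmation from the paper, that the NH variant uses something close to it:

    ```python
    # Standard approximation of the equivalent rectangular bandwidth (ERB)
    # of a normal-hearing auditory filter centered at f_hz (Glasberg & Moore,
    # 1990). Presumably the basis of the GMMSE-AMT[ERB]-NH configuration.

    def erb_hz(f_hz):
        """ERB in Hz of the auditory filter centered at f_hz."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)
    ```

    At 1 kHz this gives roughly 133 Hz; the hearing-impaired variant would substitute broader bandwidths and elevated thresholds, as the abstract describes.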

  17. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  18. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  19. Advanced Multimodal Solutions for Information Presentation

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Godfroy-Cooper, Martine

    2018-01-01

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. 
As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long-duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.
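
The context-sensitive "smart system" idea discussed above can be illustrated with a toy rule-based selector. This is a sketch only: the factor names, thresholds, and the priority given to tactile alerts are invented for illustration and are not drawn from any NASA, industry, or DoD guideline.

```python
def select_alert_modality(visual_load, auditory_load, urgency):
    """Pick a display channel from normalized (0-1) context estimates."""
    if urgency > 0.8:
        return "tactile"           # tactile cues capture attention when urgent
    if visual_load > 0.7 >= auditory_load:
        return "auditory"          # offload a saturated visual channel
    if auditory_load > 0.7 >= visual_load:
        return "visual"
    return "visual+auditory"       # redundant bimodal presentation otherwise

print(select_alert_modality(0.9, 0.3, 0.2))  # auditory
```

A real adaptive system would need validated estimators for each context factor, and would have to be evaluated against exactly the workload and situation-awareness risks noted above.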

  20. Development of Virtual Auditory Interfaces

    DTIC Science & Technology

    2001-03-01

    …reference to compare the sound in the VE with the real-world experience… 4. Lessons from the Entertainment Industry: The entertainment industry has…created a system called "Fantasound" which wrapped the musical compositions and sound… …systems are currently being evaluated, even though we have the technology to create astounding… The first system uses a portable Sony TCD-D8 DAT audio…data set, including sound recordings and sound measurements…

  1. The influence of cochlear implants on behaviour problems in deaf children.

    PubMed

    Jiménez-Romero, Ma Salud

    2015-01-01

    This study seeks to analyse the relationship between behaviour problems in deaf children and their auditory and communication development subsequent to cochlear implantation, and to examine the incidence of these problems in comparison to their hearing peers. It uses an ex post facto prospective design with a sample of 208 Spanish children, of whom 104 were deaf subjects with cochlear implants. The first objective was to assess the relationships between behaviour problems, auditory integration, and social and communication skills in the group of deaf children; the second, to compare the frequency and intensity of behaviour problems of the deaf children with those of their hearing peers. The correlation analysis showed a significant association between the internal index of behaviour problems and auditory integration and communication skills, such that deaf children with greater auditory and communication development had no behaviour problems. When comparing deaf children with their hearing peers, behavioural disturbances were significantly more frequent in the former. According to these findings, cochlear implants may not guarantee adequate auditory and communicative development that would normalise the behaviour of deaf children.

  2. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    PubMed

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. No subject had cochlear nerve deficiency on magnetic resonance imaging, and all had used their cochlear implants for a period of 12-84 months. We divided the children into two groups: those who underwent implantation before 24 months of age and those who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) average of the aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many of them. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted after 24 months. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
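
The 4-frequency average hearing level reported above is a simple arithmetic mean of thresholds at 500, 1000, 2000 and 4000 Hz. A minimal sketch (the example audiogram values are invented, not taken from the study):

```python
def four_freq_average(thresholds_db_hl):
    """Mean aided threshold (dB HL) at 500, 1000, 2000 and 4000 Hz."""
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

# Hypothetical aided audiogram for illustration
aided = {500: 35, 1000: 30, 2000: 40, 4000: 55}
print(four_freq_average(aided))  # 40.0
```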

  3. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
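
The perceptual equalization step described above relies on Stevens' power law, psi = k * phi**n: perceived magnitude psi grows as a power of stimulus intensity phi. A minimal sketch of matching two modalities by inverting the law (the exponents and constants below are placeholders, not the values used in the study):

```python
def perceived_magnitude(phi, k, n):
    """Stevens' power law: psi = k * phi**n."""
    return k * phi ** n

def intensity_for_magnitude(psi, k, n):
    """Invert the law: the intensity phi that yields perceived magnitude psi."""
    return (psi / k) ** (1.0 / n)

# Equalize a visual and an auditory cue (placeholder exponents)
psi = perceived_magnitude(4.0, k=1.0, n=0.5)           # visual cue -> psi = 2.0
matched = intensity_for_magnitude(psi, k=1.0, n=0.67)  # auditory intensity with same psi
```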

  4. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Multi-voxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    PubMed Central

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350
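
The neural-pattern similarity underlying this analysis is typically a correlation between voxel activation patterns across conditions. A minimal, self-contained sketch with made-up toy vectors standing in for voxel patterns:

```python
def pattern_similarity(a, b):
    """Pearson correlation between two voxel activation patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Perfectly correlated toy patterns
print(round(pattern_similarity([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```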

  6. Inter-individual differences in how presentation modality affects verbal learning performance in children aged 5 to 16.

    PubMed

    Meijs, Celeste; Hurks, Petra P M; Wassenberg, Renske; Feron, Frans J M; Jolles, Jelle

    2016-01-01

    This study examines inter-individual differences in how presentation modality affects verbal learning performance. Children aged 5 to 16 performed a verbal learning test within one of three presentation modalities: pictorial, auditory, or textual. The results indicated that pictures confer a benefit over the auditory and textual presentation modalities and that this benefit increases with age. However, this effect is only found if the information to be learned is presented once (or at most twice) and only in children above the age of 7. The results may be explained in terms of single or dual coding of information in which the phonological loop is involved. The (sub)vocal rehearsal system in the phonological loop is believed to develop gradually, beginning around the age of 7. The developmental trajectories are similar for boys and girls. Additionally, auditory and textual information both seemed to be processed in a similar manner, namely without labeling or recoding, leading to single coding. In contrast, pictures are assumed to be processed by the dual coding of both the visual information and a (verbal) labeling of the pictures.

  7. Planning music-based amelioration and training in infancy and childhood based on neural evidence.

    PubMed

    Huotilainen, Minna; Tervaniemi, Mari

    2018-05-04

    Music-based amelioration and training of the developing auditory system have a long tradition, and recent neuroscientific evidence supports using music in this manner. Here, we present the available evidence showing that various music-related activities result in positive changes in brain structure and function that support auditory cognitive processes in everyday life, both for individuals with typical neural development and especially for individuals with hearing, learning, attention, or other deficits that may compromise auditory processing. We also compare different types of music-based training and show how their effects have been investigated with neural methods. Finally, we take a critical position on the multitude of error sources found in amelioration and training studies and on publication bias in the field. We discuss future improvements on these issues and their potential results at the neural and behavioral levels in infants and children, for the advancement of the field and for a more complete understanding of the possibilities and significance of the training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  8. Auditory Implant Research at the House Ear Institute 1989–2013

    PubMed Central

    Shannon, Robert V.

    2014-01-01

    The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations include being one of the first cochlear implant (CI) centers, being the first center to implant a child with a cochlear implant in the US, developing the auditory brainstem implant, and developing multiple surgical approaches and tools for otology. This paper reviews the second stage of auditory implant research at House – in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8–10 bands of information. The noise-band vocoder allowed us to evaluate the effects of the manipulation of the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. The auditory brainstem implant itself was developed, improved, and its application expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to improved outcomes for patients with the CI and ABI devices. PMID:25449009
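
A noise-band vocoder divides the speech spectrum into a small number of tonotopically ordered channels; the finding above is that roughly four such bands suffice for speech recognition. As a sketch of how channel edges might be laid out (the log spacing and the 300-6000 Hz range are common choices assumed here, not parameters taken from the HEI studies):

```python
import math

def band_edges(n_bands, lo=300.0, hi=6000.0):
    """Log-spaced band edges splitting [lo, hi] Hz into n_bands channels."""
    step = (math.log10(hi) - math.log10(lo)) / n_bands
    return [10 ** (math.log10(lo) + i * step) for i in range(n_bands + 1)]

edges = band_edges(4)  # five edges bounding four vocoder channels
```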

  9. Cortical modulation of auditory processing in the midbrain

    PubMed Central

    Bajo, Victoria M.; King, Andrew J.

    2013-01-01

    In addition to their ascending pathways that originate at the receptor cells, all sensory systems are characterized by extensive descending projections. Although the size of these connections often outweighs those that carry information in the ascending auditory pathway, we still have a relatively poor understanding of the role they play in sensory processing. In the auditory system one of the main corticofugal projections links layer V pyramidal neurons with the inferior colliculus (IC) in the midbrain. All auditory cortical fields contribute to this projection, with the primary areas providing the largest outputs to the IC. In addition to medium and large pyramidal cells in layer V, a variety of cell types in layer VI make a small contribution to the ipsilateral corticocollicular projection. Cortical neurons innervate the three IC subdivisions bilaterally, although the contralateral projection is relatively small. The dorsal and lateral cortices of the IC are the principal targets of corticocollicular axons, but input to the central nucleus has also been described in some studies and is distinctive in its laminar topographic organization. Focal electrical stimulation and inactivation studies have shown that the auditory cortex can modify almost every aspect of the response properties of IC neurons, including their sensitivity to sound frequency, intensity, and location. Along with other descending pathways in the auditory system, the corticocollicular projection appears to continually modulate the processing of acoustical signals at subcortical levels. In particular, there is growing evidence that these circuits play a critical role in the plasticity of neural processing that underlies the effects of learning and experience on auditory perception by enabling changes in cortical response properties to spread to subcortical nuclei. PMID:23316140

  10. Development of the Acoustically Evoked Behavioral Response in Larval Plainfin Midshipman Fish, Porichthys notatus

    PubMed Central

    Alderks, Peter W.; Sisneros, Joseph A.

    2013-01-01

    The ontogeny of hearing in fishes has become a major interest among bioacoustics researchers studying fish behavior and sensory ecology. Most fish begin to detect acoustic stimuli during the larval stage, which can be important for navigation, predator avoidance and settlement; however, relatively little is known about the hearing capabilities of larval fishes. We characterized the acoustically evoked behavioral response (AEBR) in the plainfin midshipman fish, Porichthys notatus, and used this innate startle-like response to characterize this species' auditory capability during larval development. Age and size of larval midshipman were highly correlated (r2 = 0.92). The AEBR was first observed in larvae at 1.4 cm TL. At a size ≥1.8 cm TL, all larvae responded to a broadband stimulus of 154 dB re 1 µPa or −15.2 dB re 1 g (z-axis). Lowest AEBR thresholds were 140–150 dB re 1 µPa or −33 to −23 dB re 1 g for frequencies below 225 Hz. Larval fish with size ranges of 1.9–2.4 cm TL had significantly lower best evoked frequencies than the other tested size groups. We also investigated the development of the lateral line organ and its function in mediating the AEBR. The lateral line organ is likely involved in mediating the AEBR but is not necessary to evoke the startle-like response. The midshipman auditory and lateral line systems are functional during early development when the larvae are in the nest, and the auditory system appears to have similar tuning characteristics throughout all life history stages. PMID:24340003
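
The underwater sound levels above use the dB re 1 µPa convention: level = 20 * log10(p / 1 µPa). A small conversion sketch (the helper names are ours, not from the study):

```python
import math

def db_re_1upa(pressure_pa):
    """Sound pressure level in dB re 1 µPa, the underwater reference."""
    return 20.0 * math.log10(pressure_pa / 1e-6)

def pressure_from_db(level_db):
    """Inverse: sound pressure in Pa for a level in dB re 1 µPa."""
    return 1e-6 * 10 ** (level_db / 20.0)

# The 154 dB re 1 µPa stimulus corresponds to roughly 50 Pa of pressure
print(round(pressure_from_db(154), 1))  # 50.1
```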

  11. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, how supra-modal and modality-specific mechanisms support the practice effect during cross-modal selective attention, and whether the practice effect shows the same modality preferences as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with the hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention more flexibly adapted behavior with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed as practice progressed, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but only from the ventral visual stream during visual attention. To efficiently suppress the irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. 
The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction  Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective  The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method  Seventeen normal-hearing individuals participated in the study, with informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure tones, using 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, we used 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result  Results revealed that MMN was present in only 64% of the individuals in both conditions. Further, Multivariate Analysis of Variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion  The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz vs. 1010 Hz) and gross (1000 Hz vs. 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
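
The "fine" and "gross" contrasts above differ in their relative frequency change (Weber fraction): 10 Hz on a 1000 Hz standard is a 1% step, 100 Hz a 10% step. A one-line helper makes this explicit:

```python
def weber_fraction(standard_hz, deviant_hz):
    """Relative frequency change between standard and deviant tones."""
    return abs(deviant_hz - standard_hz) / standard_hz

print(weber_fraction(1000, 1010))  # 0.01 -> the 'fine' contrast
print(weber_fraction(1000, 1100))  # 0.1  -> the 'gross' contrast
```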

  13. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.

    PubMed

    Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P

    2005-05-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus-specific plasticity and indicate that background conditions can strongly influence cortical plasticity.

  14. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430
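
Frequency separation in streaming experiments like this one is conventionally expressed in semitones, 12 * log2(f2 / f1). A small helper (the example frequencies are illustrative, not the study's stimuli):

```python
import math

def semitone_separation(f1_hz, f2_hz):
    """Frequency separation in semitones between two tones."""
    return 12.0 * math.log2(f2_hz / f1_hz)

print(semitone_separation(400, 800))  # 12.0 (one octave)
```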

  15. Subcortical functional reorganization due to early blindness

    PubMed Central

    Jiang, Fang; Fine, Ione; Watkins, Kate E.; Bridge, Holly

    2015-01-01

    Lack of visual input early in life results in occipital cortical responses to auditory and tactile stimuli. However, it remains unclear whether cross-modal plasticity also occurs in subcortical pathways. With the use of functional magnetic resonance imaging, auditory responses were compared across individuals with congenital anophthalmia (absence of eyes), those with early onset (in the first few years of life) blindness, and normally sighted individuals. We find that the superior colliculus, a “visual” subcortical structure, is recruited by the auditory system in congenital and early onset blindness. Additionally, auditory subcortical responses to monaural stimuli were altered as a result of blindness. Specifically, responses in the auditory thalamus were equally strong to contralateral and ipsilateral stimulation in both groups of blind subjects, whereas sighted controls showed stronger responses to contralateral stimulation. These findings suggest that early blindness results in substantial reorganization of subcortical auditory responses. PMID:25673746

  16. Subcortical functional reorganization due to early blindness.

    PubMed

    Coullon, Gaelle S L; Jiang, Fang; Fine, Ione; Watkins, Kate E; Bridge, Holly

    2015-04-01

    Lack of visual input early in life results in occipital cortical responses to auditory and tactile stimuli. However, it remains unclear whether cross-modal plasticity also occurs in subcortical pathways. With the use of functional magnetic resonance imaging, auditory responses were compared across individuals with congenital anophthalmia (absence of eyes), those with early onset (in the first few years of life) blindness, and normally sighted individuals. We find that the superior colliculus, a "visual" subcortical structure, is recruited by the auditory system in congenital and early onset blindness. Additionally, auditory subcortical responses to monaural stimuli were altered as a result of blindness. Specifically, responses in the auditory thalamus were equally strong to contralateral and ipsilateral stimulation in both groups of blind subjects, whereas sighted controls showed stronger responses to contralateral stimulation. These findings suggest that early blindness results in substantial reorganization of subcortical auditory responses. Copyright © 2015 the American Physiological Society.

  17. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  18. A Wearable System for Gait Training in Subjects with Parkinson's Disease

    PubMed Central

    Casamassima, Filippo; Ferrari, Alberto; Milosevic, Bojan; Ginis, Pieter; Farella, Elisabetta; Rocchi, Laura

    2014-01-01

    In this paper, a system for gait training and rehabilitation for Parkinson's disease (PD) patients in a daily life setting is presented. It is based on a wearable architecture aimed at the provision of real-time auditory feedback. Recent studies have, in fact, shown that PD patients can benefit from a motor therapy based on auditory cueing and feedback, as happens in traditional rehabilitation contexts with verbal instructions given by clinical operators. To this end, a system based on a wireless body sensor network and a smartphone has been developed. The system enables real-time extraction of gait spatio-temporal features and their comparison with a patient's reference walking parameters captured in the lab under clinical operator supervision. Feedback is returned to the user in the form of vocal messages, encouraging the user to maintain or correct his or her walking behavior. This paper describes the overall concept, the proposed usage scenario and the parameters estimated for the gait analysis. It also presents, in detail, the hardware-software architecture of the system and the evaluation of system reliability by testing it on a few subjects. PMID:24686731
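    The comparison step described in the abstract, in which real-time gait parameters are checked against a patient's lab-captured reference values and vocal feedback is returned, can be sketched as follows. This is a minimal illustration, not the authors' software; the parameter names, reference values, and the 10% tolerance are invented for the example.

```python
# Hypothetical sketch of the feedback loop: compare each gait update's
# spatio-temporal parameters against per-patient reference values captured
# in the lab, and return an encouraging or corrective vocal-style message.

REFERENCE = {"cadence_spm": 105.0, "stride_len_m": 1.20}  # illustrative baseline
TOLERANCE = 0.10  # accept deviations within +/-10% of the reference

def feedback(measured: dict) -> str:
    """Return a message for one gait update, given measured parameters."""
    for name, ref in REFERENCE.items():
        deviation = (measured[name] - ref) / ref
        if abs(deviation) > TOLERANCE:
            direction = "increase" if deviation < 0 else "reduce"
            return f"Please {direction} your {name}"
    return "Good, keep walking like this"

# A cadence about 14% below the reference triggers a corrective message.
print(feedback({"cadence_spm": 90.0, "stride_len_m": 1.18}))
```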

  19. The LAC Test: A New Look at Auditory Conceptualization and Literacy Development K-12.

    ERIC Educational Resources Information Center

    Lindamood, Charles; And Others

    The Lindamood Auditory Conceptualization (LAC) Test was constructed with the recognition that the process of decoding involves an integration of the auditory, visual, and motor senses. Requiring the manipulation of colored blocks to indicate conceptualization of test patterns spoken by the examiner, subtest 1 entails coding of identity, number,…

  20. Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.

    PubMed

    Hazell, J W; Jastreboff, P J

    1990-02-01

    A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose signal undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus, and the way in which they respond to changes in the environment, are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and its interaction with the prefrontal cortex and limbic system, is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.

  1. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder

    PubMed Central

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-01-01

    Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information. PMID:26989281

  2. Recent advances in the development and function of type II spiral ganglion neurons in the mammalian inner ear

    PubMed Central

    Zhang, Kaidi D.; Coate, Thomas M.

    2016-01-01

    In hearing, mechanically sensitive hair cells (HCs) in the cochlea release glutamate onto spiral ganglion neurons (SGNs) to relay auditory information to the central nervous system (CNS). There are two main SGN subtypes, which differ in morphology, number, synaptic targets, innervation patterns and firing properties. About 90-95% of SGNs are the type I SGNs, which make a single bouton connection with inner hair cells (IHCs) and have been well described in the canonical auditory pathway for sound detection. However, less attention has been given to the type II SGNs, which exclusively innervate outer hair cells (OHCs). In this review, we emphasize recent advances in the molecular mechanisms that control how type II SGNs develop and form connections with OHCs, and exciting new insights into the function of type II SGNs. PMID:27760385

  3. A connection between the Efferent Auditory System and Noise-Induced Tinnitus Generation. Reduced contralateral suppression of TEOAEs in patients with noise-induced tinnitus.

    PubMed

    Lalaki, Panagiota; Hatzopoulos, Stavros; Lorito, Guiscardo; Kochanek, Krzysztof; Sliwa, Lech; Skarzynski, Henryk

    2011-07-01

    Subjective tinnitus is an auditory perception that is not caused by external stimulation, its source being anywhere in the auditory system. Furthermore, evidence exists that exposure to noise alters cochlear micromechanics, either directly or through complex feed-back mechanisms, involving the medial olivocochlear efferent system. The aim of this study was to assess the role of the efferent auditory system in noise-induced tinnitus generation. Contralateral sound-activated suppression of TEOAEs was performed in a group of 28 subjects with noise-induced tinnitus (NIT) versus a group of 35 subjects with normal hearing and tinnitus, without any history of exposure to intense occupational or recreational noise (idiopathic tinnitus-IT). Thirty healthy, normally hearing volunteers were used as controls for the efferent suppression test. Suppression of the TEOAE amplitude less than 1 dB SPL was considered abnormal, giving a false positive rate of 6.7%. Eighteen out of 28 (64.3%) patients of the NIT group and 9 out of 35 (25.7%) patients of the IT group showed abnormal suppression values, which were significantly different from the controls' (p<0.0001 and p<0.045, respectively). The abnormal activity of the efferent auditory system in NIT cases might indicate that either the activity of the efferent fibers innervating the outer hair cells (OHCs) is impaired or that the damaged OHCs themselves respond abnormally to the efferent stimulation.

  4. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    ERIC Educational Resources Information Center

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  5. Dual-stream accounts bridge the gap between monkey audition and human language processing. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael Arbib

    NASA Astrophysics Data System (ADS)

    Garrod, Simon; Pickering, Martin J.

    2016-03-01

    Over the last few years there has been a resurgence of interest in dual-stream dorsal-ventral accounts of language processing [4]. This has led to recent attempts to bridge the gap between the neurobiology of primate audition and human language processing with the dorsal auditory stream assumed to underlie time-dependent (and syntactic) processing and the ventral to underlie some form of time-independent (and semantic) analysis of the auditory input [3,10]. Michael Arbib [1] considers these developments in relation to his earlier Mirror System Hypothesis about the origins of human language processing [11].

  6. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. Visual feedback within such a closed-loop system excludes those with visual problems, and hence there is a need for alternative sensory feedback pathways. In the context of substituting the auditory channel for the visual channel, this study aims to add to the limited evidence that it is possible to replace visual feedback with its auditory equivalent, and to assess the impact this has on BCI performance. Second, the study aims to determine for the first time whether the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only, with runs of each presentation method applied within the session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences among the types of auditory feedback presented across five sessions.

  7. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  8. Passive stimulation and behavioral training differentially transform temporal processing in the inferior colliculus and primary auditory cortex

    PubMed Central

    Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.

    2016-01-01

    In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that independent of deafness duration passive stimulation and behavioral training differentially transform temporal processing in auditory midbrain and cortex, and primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs. passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594

  9. Intracochlear Drug Delivery Systems

    PubMed Central

    Borenstein, Jeffrey T.

    2011-01-01

    Introduction Advances in molecular biology and in the basic understanding of the mechanisms associated with sensorineural hearing loss and other diseases of the inner ear are paving the way towards new treatment approaches for millions of patients. However, the cochlea is a particularly challenging target for drug therapy, and new technologies will be required to provide safe and efficacious delivery of these compounds. Emerging delivery systems based on microfluidic technologies are showing promise as a means for direct intracochlear delivery. Ultimately, these systems may serve as a means for extended delivery of regenerative compounds to restore hearing in patients suffering from a host of auditory diseases. Areas covered in this review Recent progress in the development of drug delivery systems capable of direct intracochlear delivery is reviewed, including passive systems such as osmotic pumps, active microfluidic devices, and systems combined with currently available devices such as cochlear implants. The aim of this article is to provide a concise review of intracochlear drug delivery systems currently under development that are ultimately capable of being combined with emerging therapeutic compounds for the treatment of inner ear diseases. Expert Opinion Safe and efficacious treatment of auditory diseases will require the development of microscale delivery devices, capable of extended operation and direct application to the inner ear. These advances will require miniaturization and integration of multiple functions, including drug storage, delivery, power management and sensing, ultimately enabling closed-loop control and timed-sequence delivery devices for treatment of these diseases. PMID:21615213

  10. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    PubMed

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
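    The two timing designs compared in this abstract can be made concrete with a few lines of code: in the fixed-phase (FP) design the two streams share one period and are offset by half a period, while in the drifting-phase (DP) design the periods differ, so the relative phase drifts over time. The periods, offsets, and tone counts below are invented for the example, not taken from the study.

```python
# Onset times (seconds) of n tones for one auditory stream with a given
# period and starting phase; rounding keeps the float values tidy.
def stream_onsets(period_s: float, phase_s: float, n: int) -> list:
    return [round(phase_s + k * period_s, 3) for k in range(n)]

# FP design: equal periods, opposite phase (half-period offset).
fp_left  = stream_onsets(0.8, 0.0, 4)   # -> [0.0, 0.8, 1.6, 2.4]
fp_right = stream_onsets(0.8, 0.4, 4)   # -> [0.4, 1.2, 2.0, 2.8]

# DP design: unequal periods, so the phase relation drifts.
dp_left  = stream_onsets(0.8, 0.0, 4)
dp_right = stream_onsets(0.9, 0.4, 4)   # -> [0.4, 1.3, 2.2, 3.1]
```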

  11. Strategy Choice Mediates the Link between Auditory Processing and Spelling

    PubMed Central

    Kwong, Tru E.; Brachman, Kyle J.

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities. PMID:25198787

  13. Learning, neural plasticity and sensitive periods: implications for language acquisition, music training and transfer across the lifespan

    PubMed Central

    White, Erin J.; Hutka, Stefanie A.; Williams, Lynne J.; Moreno, Sylvain

    2013-01-01

    Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain’s ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning for learning during and after sensitive periods and outline avenues of future research. PMID:24312022

  14. The Relationship of Neurogenesis and Growth of Brain Regions to Song Learning

    ERIC Educational Resources Information Center

    Kirn, John R.

    2010-01-01

    Song learning, maintenance and production require coordinated activity across multiple auditory, sensory-motor, and neuromuscular structures. Telencephalic components of the sensory-motor circuitry are unique to avian species that engage in song learning. The song system shows protracted development that begins prior to hatching but continues well…

  15. Process Deficits in Learning Disabled Children and Implications for Reading.

    ERIC Educational Resources Information Center

    Johnson, Doris J.

    An exploration of specific deficits of learning disabled children, especially in the auditory system, is presented in this paper. Disorders of attention, perception, phonemic and visual discrimination, memory, and symbolization and conceptualization are considered. The paper develops several questions for teachers of learning disabled children to…

  16. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    PubMed

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
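    The STDP learning named in this abstract is a standard rule in the literature: a presynaptic spike arriving shortly before a postsynaptic spike strengthens the synapse, while the reverse order weakens it, with exponentially decaying time windows. The following generic pair-based sketch is not the authors' implementation, and the amplitudes and time constant are illustrative only.

```python
import math

def stdp_dw(dt_ms: float, a_plus: float = 0.01, a_minus: float = 0.012,
            tau_ms: float = 20.0) -> float:
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:       # pre before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:       # post before pre: depression
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

# Causal pairings with small delays produce the largest weight increases.
print(stdp_dw(5.0) > stdp_dw(40.0) > 0.0 > stdp_dw(-5.0))  # -> True
```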

  18. Central Nervous Activity upon Systemic Salicylate Application in Animals with Kanamycin-Induced Hearing Loss - A Manganese-Enhanced MRI (MEMRI) Study

    PubMed Central

    Gröschel, Moritz; Götze, Romy; Müller, Susanne; Ernst, Arne; Basta, Dietmar

    2016-01-01

    This study investigated the effect of systemic salicylate on central auditory and non-auditory structures in mice. Since cochlear hair cells are known to be one major target of salicylate, cochlear effects were reduced by using kanamycin to remove or impair hair cells. Neuronal brain activity was measured using the non-invasive manganese-enhanced magnetic resonance imaging technique. For all brain structures investigated, calcium-related neuronal activity was increased following systemic application of a sodium salicylate solution, probably due to neuronal hyperactivity. In addition, it was shown that the central effect of salicylate was not limited to the auditory system. A general alteration of calcium-related activity was indicated by an increase in manganese accumulation in the preoptic area of the anterior hypothalamus, as well as in the amygdala. The present data suggest that salicylate-induced activity changes in the auditory system differ from those shown in studies of noise trauma. Since salicylate action is reversible, central pharmacological effects of salicylate compared to those of (permanent) noise-induced hearing impairment and tinnitus might induce different pathophysiologies. These should therefore be treated as different causes with the same symptoms. PMID:27078034

  19. Auditory Scene Analysis: An Attention Perspective

    PubMed Central

    2017-01-01

    Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599

  20. The role of audition in early psychic development, with special reference to the use of the pull-toy in the separation-individuation phase.

    PubMed

    Shopper, M

    1978-01-01

    The role of audition as an important perceptual modality in early psychic development has been neglected. Some reasons for this neglect are suggested. In the development of psychoanalytic technique, the analyst has changed from a "tactile presence" to a "visual presence," then finally, with the analyst positioning himself behind the couch, to an "auditory presence." Several clinical examples from analytic patients as well as child development in normal and deaf children provide instances of each type of perceptual "presence." It is suggested that, in evaluating analyzability, analysis requires a specific ego ability, namely, tolerance for the analyst as an "auditory presence." It is emphasized that some patients, for reasons of development, constitution, and/or significant stress (separation), cannot work with the analyst as an "auditory presence," but regress to the analyst as a "visual" or "tactile" presence. The importance of audition in early mother/stranger differentiations, and in the peek-a-boo game, is a developmental precursor to the use of audition as a contact modality in the separation and individuation phase. Audition permits active locomotion and separation from tactile and visual contact modalities between toddler and mother, while at the same time maintaining contact via their respective "auditory presence" for each other. The utilization of the pull-toy in mastering the conflicts of the separation-individuation phase is demonstrated. The pull-toy is heir to the teddy bear and ancestor to the tricycle. Greater attentiveness to the auditory perceptual modality may help us understand developmental phenomena, better evaluate the potential analysand, and clarify clinical problems of audition occurring in dreams and those areas of psychopathology having to do with auditory phenomena. The more refined tripartite concept of "presence" as it relates to the predominant perceptual modality--tactile, visual, auditory--is felt to be a useful conceptualization for both developmental and clinical understanding.

  1. The development of interactive multimedia based on auditory, intellectually, repetition in repetition algorithm learning to increase learning outcome

    NASA Astrophysics Data System (ADS)

    Munir; Sutarno, H.; Aisyah, N. S.

    2018-05-01

This research aims to determine how the development of interactive multimedia based on auditory, intellectually, and repetition learning can improve student learning outcomes. The interactive multimedia was developed in five stages. The analysis stage comprised a literature study, questionnaires, interviews, and observations. The design stage covered database design, flowcharts, storyboards, and the repetition-algorithm material, while the development stage produced a web-based framework. Presentation of the material follows the auditory, intellectually, repetition learning model: the auditory component is provided by recorded narration of the material, which is presented alongside a variety of intellectual points. The multimedia product was validated by material and media experts. The implementation stage was conducted with grade XI-TKJ2 at SMKN 1 Garut. Based on the gain index, the increase in student learning outcomes was 0.46, a moderate gain attributed to students' interest in using the interactive multimedia, while the multimedia assessment scored 84.36%, categorized as very good.
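The reported gain index of 0.46 is consistent with a normalized (Hake) gain, g = (post − pre)/(max − pre), the usual index for pre/post learning studies; the abstract does not state its exact formula, so the function and the scores below are illustrative assumptions only:

```python
def normalized_gain(pre, post, max_score=100.0):
    # Hake's normalized gain: fraction of the possible improvement achieved.
    # Values of roughly 0.3-0.7 are conventionally labeled "moderate" (fair).
    return (post - pre) / (max_score - pre)

# Hypothetical pre/post scores chosen to reproduce the reported index of 0.46:
g = normalized_gain(pre=40.0, post=67.6)
```

With a pretest mean of 40 and a posttest mean of 67.6 (both hypothetical), the students recover 46% of the headroom between pretest and a perfect score, matching the 0.46 reported above.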

  2. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    PubMed

Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD or how the disorder should be assessed and managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using interaural time differences (ITD) and interaural intensity differences (IID) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significant negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
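The negative association reported above between working-memory scores and ITD errors is the kind of relation a Pearson correlation coefficient captures; a minimal sketch follows (the function name and data are illustrative, not the authors' analysis pipeline):

```python
import math

def pearson_r(x, y):
    # Pearson correlation between, e.g., working-memory scores (x)
    # and lateralization (ITD) error counts (y).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Higher memory scores paired with fewer ITD errors yield r < 0, the pattern the study reports for the high-pass noise stimulus.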

  3. Song decrystallization in adult zebra finches does not require the song nucleus NIf.

    PubMed

    Roy, Arani; Mooney, Richard

    2009-08-01

    In adult male zebra finches, transecting the vocal nerve causes previously stable (i.e., crystallized) song to slowly degrade, presumably because of the resulting distortion in auditory feedback. How and where distorted feedback interacts with song motor networks to induce this process of song decrystallization remains unknown. The song premotor nucleus HVC is a potential site where auditory feedback signals could interact with song motor commands. Although the forebrain nucleus interface of the nidopallium (NIf) appears to be the primary auditory input to HVC, NIf lesions made in adult zebra finches do not trigger song decrystallization. One possibility is that NIf lesions do not interfere with song maintenance, but do compromise the adult zebra finch's ability to express renewed vocal plasticity in response to feedback perturbations. To test this idea, we bilaterally lesioned NIf and then transected the vocal nerve in adult male zebra finches. We found that bilateral NIf lesions did not prevent nerve section-induced song decrystallization. To test the extent to which the NIf lesions disrupted auditory processing in the song system, we made in vivo extracellular recordings in HVC and a downstream anterior forebrain pathway (AFP) in NIf-lesioned birds. We found strong and selective auditory responses to the playback of the birds' own song persisted in HVC and the AFP following NIf lesions. These findings suggest that auditory inputs to the song system other than NIf, such as the caudal mesopallium, could act as a source of auditory feedback signals to the song motor network.

  4. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.

    PubMed

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-09-01

    At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.

  5. Auditory psychophysics and perception.

    PubMed

    Hirsh, I J; Watson, C S

    1996-01-01

    In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some aspects of individual difference that are sufficiently important to question the goal of characterizing auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.

  6. Maturation of neurotransmission in the developing rat cochlea: immunohistochemical evidence from differential expression of synaptophysin and synaptobrevin 2

    PubMed Central

    He, S.; Yang, J.

    2011-01-01

Synaptophysin and synaptobrevin 2 associate closely with the packaging and storage of synaptic vesicles and with transmitter release, and both play important roles in the development of the rat cochlea. We examined the differential expression of synaptophysin and synaptobrevin 2 in the developing Sprague-Dawley rat cochlea, and investigated the relationship between their expression and auditory development. Expression of synaptophysin and synaptobrevin 2 was not observed in Kolliker's organ or the organ of Corti at postnatal day 1 (P1) or P5, nor in the apical turn of the cochlea at P10. Expression was detected in the outer spiral bundle (OSB), the inner spiral bundle (ISB), and the medial wall of the Deiters' cells of the cochlea at P14 and P28, and in the middle and basal turns of the organ of Corti at P10. Synaptobrevin 2 was expressed at the top of the inner hair cells (IHCs) in the organ of Corti of both P14 and P28 rats. All spiral ganglion neurons (SGNs) were stained at all ages examined. The localization of synaptophysin and synaptobrevin 2 in the cochlea was closely associated with the distribution of nerve fibers and with neural activity (the docking and release of synaptic vesicles). Synaptophysin and synaptobrevin 2 were expressed in a dynamic manner during the development of the rat cochlea. Their differential expression during development supported the establishment of connections between nerve endings and target cells, and likely played a key role in the correct coding of auditory information during auditory system development. PMID:21556117

  7. Auditory processing and speech perception in children with specific language impairment: relations with oral language and literacy skills.

    PubMed

    Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge

    2012-01-01

This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children aged 6 years 3 months to 6 years 8 months attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal-reading groups did not differ in speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and made a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicate that speech perception has a unique direct impact on reading development, not only through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation.

    PubMed

    Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen

    2017-01-01

Early hearing deprivation can affect the development of auditory, language, and visual abilities: insufficient or absent stimulation of the auditory cortex during sensitive periods of plasticity can impair hearing, language, and visual development. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low-frequency fluctuations (ALFF) and regional homogeneity (ReHo) of auditory, language, and vision-related brain areas were compared between the deaf infants and the normal-hearing subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex, and increased ALFF and ReHo were observed in vision-related cortex, suggesting that hearing and language function were impaired and visual function was enhanced as a consequence of the loss of hearing. ALFF of the left Brodmann area 45 (BA45) was negatively correlated with the duration of deafness in infants with CSSHL, while ALFF of the right BA39 was positively correlated with it. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and visual processing has already been affected by congenital severe sensorineural hearing loss before 4 years of age.

  9. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation

    PubMed Central

    Kolarik, Andrew J.; Scarfe, Amy C.; Moore, Brian C. J.; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. PMID:28407000

  11. P300 in individuals with sensorineural hearing loss.

    PubMed

    Reis, Ana Cláudia Mirandola Barbosa; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima; Garcia, Cristiane Fregonesi Dutra; Funayama, Carolina Araújo Rodrigues; Iório, Maria Cecília Martinelli

    2015-01-01

Behavioral and electrophysiological auditory evaluations contribute to the understanding of the auditory system and of the process of intervention. To study P300 in subjects with severe or profound sensorineural hearing loss. This was a descriptive cross-sectional prospective study. It included 29 individuals of both genders with severe or profound sensorineural hearing loss without other types of disorders, aged 11 to 42 years; all were assessed by behavioral audiological evaluation and auditory evoked potentials. A recording of the P3 wave was obtained in 17 individuals, with a mean latency of 326.97 ms and a mean amplitude of 3.76 µV. There were significant differences in latency in relation to age and in amplitude according to degree of hearing loss. There was a statistically significant association of the P300 results with the degree of hearing loss (p=0.04), with the predominant auditory communication channels (p<0.0001), and with time of hearing loss. P300 can be recorded in individuals with severe and profound congenital sensorineural hearing loss; it may contribute to the understanding of cortical development and is a good predictor of early intervention outcome. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  12. A biologically plausible computational model for auditory object recognition.

    PubMed

    Larson, Eric; Billimoria, Cyrus P; Sen, Kamal

    2009-01-01

    Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach the spike similarity metrics can be used to classify the stimuli into groups used to evoke the spike trains. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
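The nearest-prototype scheme described above can be sketched in a few lines. This is a generic van Rossum-style spike-distance classifier, not the authors' integrate-and-fire decision network, and all names and parameters are illustrative assumptions:

```python
import math

def filtered_trace(spike_times, t_max=1.0, dt=0.001, tau=0.01):
    # Convolve a spike train (list of spike times in seconds) with a causal
    # exponential kernel exp(-t/tau), yielding a smooth "synaptic" trace.
    n = int(t_max / dt)
    trace = [0.0] * n
    for t in spike_times:
        i0 = int(t / dt)
        for i in range(i0, n):
            trace[i] += math.exp(-(i - i0) * dt / tau)
    return trace

def spike_distance(train_a, train_b, **kw):
    # van Rossum-style distance: L2 norm between the filtered traces.
    fa, fb = filtered_trace(train_a, **kw), filtered_trace(train_b, **kw)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))

def classify(test_train, prototypes):
    # Nearest-prototype classification: the stimulus whose prototype
    # response is closest to the test response wins.
    return min(prototypes, key=lambda k: spike_distance(test_train, prototypes[k]))
```

For example, given prototype responses for two songs, a test response whose spikes jitter slightly around one prototype's spike times is assigned to that song; the kernel time constant tau sets the temporal precision of the comparison.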

  13. B.F. Skinner and the auditory inkblot: The rise and fall of the verbal summator as a projective technique.

    PubMed

    Rutherford, Alexandra

    2003-11-01

    Behaviorist B.F. Skinner is not typically associated with the fields of personality assessment or projective testing. However, early in his career Skinner developed an instrument he named the verbal summator, which, at one point, he referred to as a device for "snaring out complexes," much like an auditory analogue of the Rorschach inkblots. Skinner's interest in the projective potential of his technique was relatively short lived, but whereas he used the verbal summator to generate experimental data for his theory of verbal behavior, several other clinicians and researchers exploited this potential and adapted the verbal summator technique for both research and applied purposes. The idea of an auditory inkblot struck many as a useful innovation, and the verbal summator spawned the tautophone test, the auditory apperception test, and the Azzageddi test, among others. This article traces the origin, development, and eventual demise of the verbal summator as an auditory projective technique.

  14. Technical aspects of a demonstration tape for three-dimensional sound displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    This document was developed to accompany an audio cassette that demonstrates work in three-dimensional auditory displays, developed at the Ames Research Center Aerospace Human Factors Division. It provides a text version of the audio material, and covers the theoretical and technical issues of spatial auditory displays in greater depth than on the cassette. The technical procedures used in the production of the audio demonstration are documented, including the methods for simulating rotorcraft radio communication, synthesizing auditory icons, and using the Convolvotron, a real-time spatialization device.

  15. One Year of Musical Training Affects Development of Auditory Cortical-Evoked Fields in Young Children

    ERIC Educational Resources Information Center

    Fujioka, Takako; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo; Trainor, Laurel J.

    2006-01-01

    Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields…

  16. New HRCT-based measurement of the human outer ear canal as a basis for acoustical methods.

    PubMed

    Grewe, Johanna; Thiele, Cornelia; Mojallal, Hamidreza; Raab, Peter; Sankowsky-Rothe, Tobias; Lenarz, Thomas; Blau, Matthias; Teschner, Magnus

    2013-06-01

    As the form and size of the external auditory canal determine its transmitting function and hence the sound pressure in front of the eardrum, it is important to understand its anatomy in order to develop, optimize, and compare acoustical methods. High-resolution computed tomography (HRCT) data were measured retrospectively for 100 patients who had received a cochlear implant. In order to visualize the anatomy of the auditory canal, its length, radius, and the angle at which it runs were determined for the patients’ right and left ears. The canal’s volume was calculated, and a radius function was created. The determined length of the auditory canal averaged 23.6 mm for the right ear and 23.5 mm for the left ear. The calculated auditory canal volume (Vtotal) was 0.7 ml for the right ear and 0.69 ml for the left ear. The auditory canal was found to be significantly longer in men than in women, and the volume greater. The values obtained can be employed to develop a method that represents the shape of the auditory canal as accurately as possible to allow the best possible outcomes for hearing aid fitting.
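A volume like the 0.70 ml reported above can be recovered from the radius function by integrating the circular cross-section along the canal axis, V = π ∫₀ᴸ r(x)² dx. The sketch below assumes a simple uniform-tube profile purely to check orders of magnitude; the real radius function varies along the canal:

```python
import math

def canal_volume_ml(radius_mm, length_mm, n=1000):
    # Numerically integrate pi * r(x)^2 along the canal axis (midpoint rule).
    # radius_mm maps axial position (mm) to the local canal radius (mm).
    dx = length_mm / n
    vol_mm3 = sum(math.pi * radius_mm((i + 0.5) * dx) ** 2 * dx for i in range(n))
    return vol_mm3 / 1000.0  # 1 ml = 1000 mm^3

# A uniform tube of radius ~3.07 mm at the reported mean length of 23.6 mm
# reproduces roughly the reported 0.70 ml volume:
v = canal_volume_ml(lambda x: 3.07, 23.6)
```

The 3.07 mm radius is a back-calculated assumption (diameter about 6 mm, plausible for an adult ear canal), not a value from the study.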

  17. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    PubMed

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  18. Implantable Neural Interfaces for Sharks

    DTIC Science & Technology

    2007-05-01

    technology for recording and stimulating from the auditory and olfactory sensory nervous systems of the awake, swimming nurse shark, G. cirratum ... and awake animals. Finally, evidence exists that microstimulation of the olfactory system could lead to patterned behavioral responses in the ... auditory-evoked local field potentials (multimodal sensory responses) from both anesthetized and awake animals.

  19. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
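Trial averaging, which lifted classification accuracy from 70.0% to 89.5% in the study above, works because the event-related potential is time-locked to the stimulus while background EEG is not, so averaging n epochs attenuates uncorrelated noise by roughly √n. A minimal sketch of the averaging step (the epoch format is an assumption, not the authors' pipeline):

```python
def trial_average(epochs):
    # Element-wise mean over repeated stimulus presentations.
    # epochs: list of equal-length sample lists, one per trial; the
    # phase-locked P300 component survives while random EEG averages out.
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]
```

The averaged epoch, rather than a single trial, would then be fed to the classifier, mirroring the 10-trial averaging condition reported above.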

  20. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    PubMed Central

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  1. Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults.

    PubMed

    Tichko, Parker; Skoe, Erika

    2018-04-27

    Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children.

  2. Music and You: Teacher Guide.

    ERIC Educational Resources Information Center

    Semanchik, Karen

    This teacher's guide presents a course for training hearing-impaired students to listen to, create, and perform music. It emphasizes development of individual skills and group participation, encouraging students to contribute a wide variety of auditory and musical abilities and experiences while developing auditory acuity and attention. A variety…

  3. [Fragile X syndrome with Dandy-Walker variant: a clinical study of oral and written communicative manifestations].

    PubMed

    Lamônica, Dionísia Aparecida Cusin; Ferraz, Plínio Marcos Duarte Pinto; Ferreira, Amanda Tragueta; Prado, Lívia Maria do; Abramides, Dagma Venturini Marquez; Gejão, Mariana Germano

    2011-01-01

Fragile X syndrome is the most frequent cause of inherited intellectual disability. The Dandy-Walker variant is a specific constellation of neuroradiological findings. The present study reports oral and written communication findings in a 15-year-old boy with a clinical and molecular diagnosis of Fragile X syndrome and neuroimaging findings consistent with the Dandy-Walker variant. The speech-language pathology and audiology evaluation was carried out using the Communicative Behavior Observation, the Phonology assessment of the ABFW - Child Language Test, the Phonological Abilities Profile, the Test of School Performance, and the Illinois Test of Psycholinguistic Abilities. Stomatognathic system and hearing assessments were also performed. The evaluation revealed phonological, semantic, pragmatic and morphosyntactic deficits in oral language; deficits in psycholinguistic abilities (auditory reception, verbal expression, combination of sounds, auditory and visual sequential memory, auditory closure, auditory and visual association); and morphological and functional alterations in the stomatognathic system. Difficulties in decoding graphical symbols were observed in reading. In writing, the subject presented omissions, agglutinations and multiple representations with a predominant use of vowels, as well as difficulties in visuo-spatial organization. In mathematics, despite recognizing numerals, the participant did not perform arithmetic operations. No alterations were observed in the peripheral hearing evaluation. The constellation of behavioral, cognitive, linguistic and perceptual symptoms described for Fragile X syndrome, together with the structural central nervous system alterations observed in the Dandy-Walker variant, markedly interfered with the development of communicative abilities, with the learning of reading and writing, and with the individual's social integration.

  4. Entrainment to an auditory signal: Is attention involved?

    PubMed

    Kunert, Richard; Jongman, Suzanne R

    2017-01-01

Many natural auditory signals, including music and language, change periodically. However, the effect of such auditory rhythms on the brain remains unclear. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict that rhythm entrainment also influences memory for visual stimuli. In 2 pseudoword memory experiments, we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.

  5. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  6. Recent advances in exploring the neural underpinnings of auditory scene perception

    PubMed Central

    Snyder, Joel S.; Elhilali, Mounya

    2017-01-01

    Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022

  7. Speech Evoked Auditory Brainstem Response in Stuttering

    PubMed Central

    Tahaei, Ali Akbar; Ashayeri, Hassan; Pourbakht, Akram; Kamali, Mohammad

    2014-01-01

Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency. PMID:25215262

  8. Do informal musical activities shape auditory skill development in preschool-age children?

    PubMed

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-08-29

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.

  9. Do informal musical activities shape auditory skill development in preschool-age children?

    PubMed Central

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-01-01

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children. PMID:24009597

  10. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603
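The contrast measure in the abstract above (variation in sound pressure per frequency band relative to the mean, i.e., a coefficient of variation) and the inverse relation between contrast and gain can be illustrated with a minimal sketch. The synthetic envelopes and the simple reciprocal gain rule below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def spectrotemporal_contrast(envelope):
    """Contrast of one frequency band: standard deviation of the
    level envelope divided by its mean (coefficient of variation)."""
    envelope = np.asarray(envelope, dtype=float)
    return envelope.std() / envelope.mean()

def relative_gain(contrast, reference_contrast=1.0):
    """Toy gain rule: gain scales inversely with recent contrast,
    so low-contrast input is amplified (illustrative only)."""
    return reference_contrast / contrast

# Two synthetic band envelopes with the same mean level but different variability
rng = np.random.default_rng(0)
low = 1.0 + 0.1 * rng.standard_normal(10_000)   # low-contrast band
high = 1.0 + 0.5 * rng.standard_normal(10_000)  # high-contrast band

c_low = spectrotemporal_contrast(low)
c_high = spectrotemporal_contrast(high)
assert c_low < c_high
assert relative_gain(c_low) > relative_gain(c_high)  # gain rises as contrast falls
```

Under this toy rule, halving the contrast of recent stimulation doubles the gain; the paper reports only partial compensation, so a real fit would use a compressive function rather than a strict reciprocal.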

  11. Effect of delayed auditory feedback on normal speakers at two speech rates

    NASA Astrophysics Data System (ADS)

    Stuart, Andrew; Kalinowski, Joseph; Rastatter, Michael P.; Lynch, Kerry

    2002-05-01

This study investigated the effect of short and long auditory feedback delays at two speech rates in normal speakers. Seventeen participants spoke under delayed auditory feedback (DAF) at 0, 25, 50, and 200 ms at normal and fast rates of speech. Participants displayed two to three times more dysfluencies at 200 ms than at no delay or the shorter delays (p<0.05), and significantly more dysfluencies at the fast rate of speech (p=0.028). These findings implicate the peripheral feedback system(s) of fluent speakers in the disruptive effects of DAF on normal speech production at long auditory feedback delays. Considering the contrast in fluency/dysfluency exhibited between normal speakers and those who stutter at short and long delays, speech disruption of normal speakers under DAF appears to be a poor analog of stuttering.

  12. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    PubMed Central

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing; however, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males, and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, and familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on analyses of both spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, which may contribute to the transmission of information about self-generated calls in males and to the storage of information about the bird's auditory experience in females. PMID:26578918

  13. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    PubMed

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing; however, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males, and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, and familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on analyses of both spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, which may contribute to the transmission of information about self-generated calls in males and to the storage of information about the bird's auditory experience in females.

  14. Virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as a potentiator of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative manikins and simulations of room acoustics. Such an interface also requires careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners, localization performance with nonindividualized HRTFs was only slightly degraded compared to a subject's inherent ability, and even inexperienced listeners may be able to adapt to a particular set of HRTFs as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring individually tailored HRTFs.
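The synthesis technique described above, filtering a source signal through a listener's measured HRTFs, reduces to convolving a mono signal with a pair of head-related impulse responses (HRIRs), one per ear. A minimal sketch; the HRIRs below are crude placeholders encoding only an interaural time and level difference, not measured ear-canal data:

```python
import numpy as np

def binaural_synthesis(mono, hrir_left, hrir_right):
    """Render a mono source at a virtual position by convolving it with
    the left- and right-ear head-related impulse responses."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Placeholder HRIRs: the left-ear signal arrives 3 samples later,
# the right-ear signal arrives first and slightly attenuated.
hrir_l = np.array([0.0, 0.0, 0.0, 1.0])
hrir_r = np.array([0.9, 0.0, 0.0, 0.0])

source = np.random.default_rng(1).standard_normal(1000)
left, right = binaural_synthesis(source, hrir_l, hrir_r)
assert len(left) == len(right) == 1000 + 4 - 1  # full convolution length
```

Real systems use HRIRs hundreds of samples long, select a pair per source direction, and run the convolution in real time (typically via FFT-based overlap-add), but the signal path is the same.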

  15. Sensation seeking, augmenting-reducing, and absolute auditory threshold: a strength-of-the-nervous-system perspective.

    PubMed

    Goldman, D; Kohn, P M; Hunt, R W

    1983-08-01

The following measures were obtained from 42 student volunteers: the General and Disinhibition subscales of the Sensation Seeking Scale (Form IV), the Reducer-Augmenter Scale, and the Absolute Auditory Threshold. General sensation seeking correlated significantly with the Reducer-Augmenter Scale, r(40) = .59, p < .001, and with the Absolute Auditory Threshold, r(40) = .45, p < .005. Both results held across sexes. These findings, that high sensation seekers tend to be reducers and to lack sensitivity to weak stimulation, were interpreted as supporting strength-of-the-nervous-system theory more than the formulation of Zuckerman and his associates.

  16. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    PubMed

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype, including a strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality of WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints, we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, the volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have previously been reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or to innate disposition. In this study, musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but also propose WS as a unique genetic model for training-independent auditory system properties.

  17. Representation of particle motion in the auditory midbrain of a developing anuran.

    PubMed

    Simmons, Andrea Megela

    2015-07-01

    In bullfrog tadpoles, a "deaf period" of lessened responsiveness to the pressure component of sounds, evident during the end of the late larval period, has been identified in the auditory midbrain. But coding of underwater particle motion in the vestibular medulla remains stable over all of larval development, with no evidence of a "deaf period." Neural coding of particle motion in the auditory midbrain was assessed to determine if a "deaf period" for this mode of stimulation exists in this brain area in spite of its absence from the vestibular medulla. Recording sites throughout the developing laminar and medial principal nuclei show relatively stable thresholds to z-axis particle motion, up until the "deaf period." Thresholds then begin to increase from this point up through the rest of metamorphic climax, and significantly fewer responsive sites can be located. The representation of particle motion in the auditory midbrain is less robust during later compared to earlier larval stages, overlapping with but also extending beyond the restricted "deaf period" for pressure stimulation. The decreased functional representation of particle motion in the auditory midbrain throughout metamorphic climax may reflect ongoing neural reorganization required to mediate the transition from underwater to amphibious life.

  18. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    PubMed Central

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration, and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to a specific enhancement effect of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  19. Design of Training Systems, Phase II-A Report. An Educational Technology Assessment Model (ETAM)

    DTIC Science & Technology

    1975-07-01

    34format" for the perceptual tasks. This is applicable to auditory as well as visual tasks. Student Participation in Learning Route. When a student enters...skill formats Skill training 05.05 Vehicle properties Instructional functions: Type of stimulus presented to student visual auditory ...Subtask 05.05. For example, a trainer to identify and interpret auditory signals would not be represented in the above list. Trainers in the vehicle

  20. Magnetoencephalographic Imaging of Auditory and Somatosensory Cortical Responses in Children with Autism and Sensory Processing Dysfunction

    PubMed Central

    Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.

    2017-01-01

    This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492

  1. Neuroendocrine control of seasonal plasticity in the auditory and vocal systems of fish

    PubMed Central

    Forlano, Paul M.; Sisneros, Joseph A.; Rohmann, Kevin N.; Bass, Andrew H.

    2014-01-01

    Seasonal changes in reproductive-related vocal behavior are widespread among fishes. This review highlights recent studies of the vocal plainfin midshipman fish, Porichthys notatus, a neuroethological model system used for the past two decades to explore neural and endocrine mechanisms of vocal-acoustic social behaviors shared with tetrapods. Integrative approaches combining behavior, neurophysiology, neuropharmacology, neuroanatomy, and gene expression methodologies have taken advantage of simple, stereotyped and easily quantifiable behaviors controlled by discrete neural networks in this model system to enable discoveries such as the first demonstration of adaptive seasonal plasticity in the auditory periphery of a vertebrate as well as rapid steroid and neuropeptide effects on vocal physiology and behavior. This simple model system has now revealed cellular and molecular mechanisms underlying seasonal and steroid-driven auditory and vocal plasticity in the vertebrate brain. PMID:25168757

  2. Distribution of glutamatergic, GABAergic, and glycinergic neurons in the auditory pathways of macaque monkeys.

    PubMed

    Ito, T; Inoue, K; Takada, M

    2015-12-03

Macaque monkeys use complex communication calls and are regarded as a model for studying the coding and decoding of complex sound in the auditory system. However, little is known about the distribution of excitatory and inhibitory neurons in the auditory system of macaque monkeys. In this study, we examined the overall distribution of cell bodies that expressed mRNAs for VGLUT1 and VGLUT2 (markers for glutamatergic neurons), GAD67 (a marker for GABAergic neurons), and GLYT2 (a marker for glycinergic neurons) in the auditory system of the Japanese macaque. In addition, we performed immunohistochemistry for VGLUT1, VGLUT2, and GAD67 in order to compare the distribution of proteins and mRNAs. We found that most of the excitatory neurons in the auditory brainstem expressed VGLUT2. In contrast, the expression of VGLUT1 mRNA was restricted to the auditory cortex (AC), periolivary nuclei, and cochlear nuclei (CN). The co-expression of GAD67 and GLYT2 mRNAs was common in the ventral nucleus of the lateral lemniscus (VNLL), CN, and superior olivary complex, except for the medial nucleus of the trapezoid body, which expressed GLYT2 alone. In contrast, the dorsal nucleus of the lateral lemniscus, inferior colliculus, thalamus, and AC expressed GAD67 alone. The absence of co-expression of VGLUT1 and VGLUT2 in the medial geniculate, medial superior olive, and VNLL suggests that synaptic responses in the target neurons of these nuclei may differ between rodents and macaque monkeys.

  3. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    PubMed

    Suga, Nobuo

    2018-04-01

For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show a unique behavior called Doppler-shift compensation for Doppler-shifted echoes, and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and for detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens the V-shaped frequency-tuning curves of the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3; that is, they are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area contains the velocity map for Doppler imaging. The DIF area serves, in particular, Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition.

  4. Neuroanatomical Evidence for Catecholamines as Modulators of Audition and Acoustic Behavior in a Vocal Teleost.

    PubMed

    Forlano, Paul M; Sisneros, Joseph A

    2016-01-01

    The plainfin midshipman fish (Porichthys notatus) is a well-studied model to understand the neural and endocrine mechanisms underlying vocal-acoustic communication across vertebrates. It is well established that steroid hormones such as estrogen drive seasonal peripheral auditory plasticity in female Porichthys in order to better encode the male's advertisement call. However, little is known of the neural substrates that underlie the motivation and coordinated behavioral response to auditory social signals. Catecholamines, which include dopamine and noradrenaline, are good candidates for this function, as they are thought to modulate the salience of and reinforce appropriate behavior to socially relevant stimuli. This chapter summarizes our recent studies which aimed to characterize catecholamine innervation in the central and peripheral auditory system of Porichthys as well as test the hypotheses that innervation of the auditory system is seasonally plastic and catecholaminergic neurons are activated in response to conspecific vocalizations. Of particular significance is the discovery of direct dopaminergic innervation of the saccule, the main hearing end organ, by neurons in the diencephalon, which also robustly innervate the cholinergic auditory efferent nucleus in the hindbrain. Seasonal changes in dopamine innervation in both these areas appear dependent on reproductive state in females and may ultimately function to modulate the sensitivity of the peripheral auditory system as an adaptation to the seasonally changing soundscape. Diencephalic dopaminergic neurons are indeed active in response to exposure to midshipman vocalizations and are in a perfect position to integrate the detection and appropriate motor response to conspecific acoustic signals for successful reproduction.

  5. Asymmetric Hearing During Development: The Aural Preference Syndrome and Treatment Options.

    PubMed

    Gordon, Karen; Henkin, Yael; Kral, Andrej

    2015-07-01

    Deafness affects ∼2 in 1000 children and is one of the most common congenital impairments. Permanent hearing loss can be treated by fitting hearing aids. More severe to profound deafness is an indication for cochlear implantation. Although newborn hearing screening programs have increased the identification of asymmetric hearing loss, parents and caregivers of children with single-sided deafness are often hesitant to pursue therapy for the deaf ear. Delayed intervention has consequences for recovery of hearing. It has long been reported that asymmetric hearing loss/single-sided deafness compromises speech and language development and educational outcomes in children. Recent studies in animal models of deafness and in children consistently show evidence of an "aural preference syndrome" in which single-sided deafness in early childhood reorganizes the developing auditory pathways toward the hearing ear, with weaker central representation of the deaf ear. Delayed therapy consequently compromises benefit for the deaf ear, with slow rates of improvement measured over time. Therefore, asymmetric hearing needs early identification and intervention. Providing early effective stimulation in both ears through appropriate fitting of auditory prostheses, including hearing aids and cochlear implants, within a sensitive period in development has a cardinal role for securing the function of the impaired ear and for restoring binaural/spatial hearing. The impacts of asymmetric hearing loss on the developing auditory system and on spoken language development have often been underestimated. Thus, the traditional minimalist approach to clinical management aimed at 1 functional ear should be modified on the basis of current evidence. Copyright © 2015 by the American Academy of Pediatrics.

  6. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.

    PubMed

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides clinical neuroscience with a new research tool, so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.
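    The frequency-oddball stimulation described here can be sketched in a few lines of Python; the carrier frequency, the two AM rates, the deviant probability, and the durations below are illustrative assumptions, not the study's actual stimulus parameters:

    ```python
    import numpy as np

    def am_tone(carrier_hz, mod_hz, dur_s, fs=48_000, depth=1.0):
        """Amplitude-modulated tone: sinusoidal carrier scaled by a
        raised-sinusoid envelope at the modulation frequency."""
        t = np.arange(int(dur_s * fs)) / fs
        envelope = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / 2.0
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    def oddball_labels(n_trials, deviant_prob=0.2, seed=0):
        """Pseudo-random standard/deviant labels for an oddball sequence."""
        rng = np.random.default_rng(seed)
        return rng.random(n_trials) < deviant_prob

    # Illustrative parameters: standards AM-modulated at 40 Hz,
    # deviants at 48 Hz (the AM-frequency deviance drives the responses).
    labels = oddball_labels(200)
    stimuli = [am_tone(1000, 48 if dev else 40, 0.3) for dev in labels]
    ```

    In a real protocol the deviant positions are additionally constrained (e.g. no two deviants in a row), which this sketch omits for brevity.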

  7. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    PubMed Central

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides clinical neuroscience with a new research tool, so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  8. Estradiol-dependent modulation of auditory processing and selectivity in songbirds

    PubMed Central

    Maney, Donna; Pinaud, Raphael

    2011-01-01

    The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556

  9. Auditory and Visual Attention Performance in Children With ADHD: The Attentional Deficiency of ADHD Is Modality Specific.

    PubMed

    Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng

    2017-08-01

    This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were assessed with the Test of Variables of Attention (TOVA). The deficiency of visual attention is more serious than that of auditory attention in children with ADHD. In the auditory modality, the deficit of attentional inconsistency alone is sufficient to explain most cases of ADHD; however, in the visual modality most of the children with ADHD suffered from deficits of sustained attention, response inhibition, and attentional inconsistency. Our results also showed that the deficit of attentional inconsistency is the most important indicator for diagnosing ADHD and planning intervention when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits of auditory attention differ from those of visual attention in children with ADHD.

  10. Auditory hallucinations: nomenclature and classification.

    PubMed

    Blom, Jan Dirk; Sommer, Iris E C

    2010-03-01

    The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an overview of the nomenclature and classification of auditory hallucinations. Relevant data were obtained from books, PubMed, Embase, and the Cochrane Library. The results are presented in the form of several classificatory arrangements of auditory hallucinations, governed by the principles of content, perceived source, perceived vivacity, relation to the sleep-wake cycle, and association with suspected neurobiologic correlates. This overview underscores the necessity to reappraise the concepts of auditory hallucinations developed during the era of classic psychiatry, to incorporate them into our current nomenclature and classification of auditory hallucinations, and to test them empirically with the aid of the structural and functional imaging techniques currently available.

  11. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions

    PubMed Central

    Intskirveli, Irakli

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral “notch” of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing. PMID:28660244

  12. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions.

    PubMed

    Askew, Caitlin; Intskirveli, Irakli; Metherate, Raju

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral "notch" of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing.
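    The TINN stimulus described above (a CF tone embedded in white noise carrying a spectral notch of variable width centered on CF) can be approximated in a few lines. This is a sketch only: the sampling rate, duration, tone level, octave-based notch edges, and the frequency-domain notch construction are illustrative assumptions rather than the authors' exact method:

    ```python
    import numpy as np

    def tinn_stimulus(cf_hz, notch_octaves, dur_s=0.2, fs=48_000, seed=0):
        """Tone-in-notched-noise: white noise with a spectral notch of
        `notch_octaves` total width centered (geometrically) on cf_hz,
        summed with a tone at cf_hz. Notch carved in the frequency domain."""
        rng = np.random.default_rng(seed)
        n = int(dur_s * fs)
        noise = rng.standard_normal(n)
        spectrum = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        lo = cf_hz * 2.0 ** (-notch_octaves / 2)   # lower notch edge
        hi = cf_hz * 2.0 ** (notch_octaves / 2)    # upper notch edge
        spectrum[(freqs >= lo) & (freqs <= hi)] = 0.0
        notched = np.fft.irfft(spectrum, n)
        t = np.arange(n) / fs
        tone = 0.5 * np.sin(2 * np.pi * cf_hz * t)
        return notched / np.max(np.abs(notched)) + tone

    # Example: 8 kHz CF tone inside a half-octave notch.
    stim = tinn_stimulus(cf_hz=8_000, notch_octaves=0.5)
    ```

    Widening `notch_octaves` pulls the noise energy farther from CF, which is what lets the paradigm trace how response gain changes across the receptive field.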

  13. Sentence Comprehension in Adolescents with down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    ERIC Educational Resources Information Center

    Miolo, Giuliana; Chapman, Robin S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  14. Central Auditory Development: Evidence from CAEP Measurements in Children Fit with Cochlear Implants

    ERIC Educational Resources Information Center

    Dorman, Michael F.; Sharma, Anu; Gilley, Phillip; Martin, Kathryn; Roland, Peter

    2007-01-01

    In normal-hearing children the latency of the P1 component of the cortical evoked response to sound varies as a function of age and, thus, can be used as a biomarker for maturation of central auditory pathways. We assessed P1 latency in 245 congenitally deaf children fit with cochlear implants following various periods of auditory deprivation. If…

  15. Auditory Temporal Information Processing in Preschool Children at Family Risk for Dyslexia: Relations with Phonological Abilities and Developing Literacy Skills

    ERIC Educational Resources Information Center

    Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol

    2006-01-01

    In this project, the hypothesis of an auditory temporal processing deficit in dyslexia was tested by examining auditory processing in relation to phonological skills in two contrasting groups of five-year-old preschool children, a familial high risk and a familial low risk group. Participants were individually matched for gender, age, non-verbal…

  16. An autism-associated serotonin transporter variant disrupts multisensory processing.

    PubMed

    Siemann, J K; Muller, C L; Forsberg, C G; Blakely, R D; Veenstra-VanderWeele, J; Wallace, M T

    2017-03-21

    Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.

  17. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  18. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
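    The reported spatial correspondence between myeloarchitecture and tonotopic signal strength amounts to correlating two per-vertex cortical maps. A minimal sketch with synthetic data; the study's actual maps, masking, and statistics are not reproduced here:

    ```python
    import numpy as np

    def spatial_correlation(myelin_map, tonotopy_strength):
        """Pearson correlation between two per-vertex cortical maps,
        ignoring vertices that are NaN in either map."""
        m = np.asarray(myelin_map, dtype=float)
        t = np.asarray(tonotopy_strength, dtype=float)
        mask = np.isfinite(m) & np.isfinite(t)
        return float(np.corrcoef(m[mask], t[mask])[0, 1])

    # Synthetic example: tonotopic strength loosely tracks myelination.
    rng = np.random.default_rng(1)
    myelin = rng.random(500)                          # stand-in for R1 values
    strength = 0.8 * myelin + 0.2 * rng.random(500)   # correlated map + noise
    r = spatial_correlation(myelin, strength)
    ```

    In practice such map-to-map correlations are usually tested against spatially aware null models (e.g. spin permutations) rather than a naive vertex-wise p-value, since neighboring vertices are not independent.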

  19. A Dual-Stream Neuroanatomy of Singing

    PubMed Central

    Loui, Psyche

    2015-01-01

    Singing requires effortless and efficient use of auditory and motor systems that center around the perception and production of the human voice. Although perception and production are usually tightly coupled functions, occasional mismatches between the two systems inform us of dissociable pathways in the brain systems that enable singing. Here I review the literature on perception and production in the auditory modality, and propose a dual-stream neuroanatomical model that subserves singing. I will discuss studies surrounding the neural functions of feedforward, feedback, and efference systems that control vocal monitoring, as well as the white matter pathways that connect frontal and temporal regions that are involved in perception and production. I will also consider disruptions of the perception-production network that are evident in tone-deaf individuals and poor pitch singers. Finally, by comparing expert singers against other musicians and nonmusicians, I will evaluate the possibility that singing training might offer rehabilitation from these disruptions through neuroplasticity of the perception-production network. Taken together, the best available evidence supports a model of dorsal and ventral pathways in auditory-motor integration that enables singing and is shared with language, music, speech, and human interactions in the auditory environment. PMID:26120242

  20. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    NASA Astrophysics Data System (ADS)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominately focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

Top