Sample records for maintaining normal hearing

  1. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    PubMed

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M

    2015-01-01

    The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

  2. Low Empathy in Deaf and Hard of Hearing (Pre)Adolescents Compared to Normal Hearing Controls

    PubMed Central

    Netten, Anouk P.; Rieffe, Carolien; Theunissen, Stephanie C. P. M.; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J.; Frijns, Johan H. M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior. Results Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Conclusions Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships. PMID:25906365

  3. Hearing versus Listening: Attention to Speech and Its Role in Language Acquisition in Deaf Infants with Cochlear Implants

    PubMed Central

    Houston, Derek M.; Bergeson, Tonya R.

    2013-01-01

    The advent of cochlear implantation has provided thousands of deaf infants and children access to speech and the opportunity to learn spoken language. Whether or not deaf infants successfully learn spoken language after implantation may depend in part on the extent to which they listen to speech rather than just hear it. We explore this question by examining the role that attention to speech plays in early language development according to a prominent model of infant speech perception – Jusczyk’s WRAPSA model – and by reviewing the kinds of speech input that maintains normal-hearing infants’ attention. We then review recent findings suggesting that cochlear-implanted infants’ attention to speech is reduced compared to normal-hearing infants and that speech input to these infants differs from input to infants with normal hearing. Finally, we discuss possible roles attention to speech may play on deaf children’s language acquisition after cochlear implantation in light of these findings and predictions from Jusczyk’s WRAPSA model. PMID:24729634

  4. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners.

    PubMed

    Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E

    2006-06-01

    The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance within a background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker could generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words of the familiar talker was then compared with performance on those same words spoken by three novel talkers and on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list on which they were trained versus a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listener's greater focus on lexical memorization of the words rather than a focus on talker-specific acoustic characteristics. On returning 6 mo later, listeners performed significantly better on the trained words relative to their initial baseline performance. Although the listeners performed significantly better on trained versus untrained words in isolation, once the trained words were embedded in sentences, no improvement in recognition over untrained words within the same sentences was shown. Older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker and to the same degree as young normal-hearing listeners. The improved performance was maintained across talkers and across time. This might imply that training a listener using a standardized list and talker may still provide benefit when these same words are presented by novel talkers outside the clinic. However, training on isolated words was not sufficient to transfer to fluent speech for the specific sentence materials used within this study. Further investigation is needed regarding approaches to improve a hearing aid user's speech understanding in everyday communication situations.

  5. Pilot study of cognition in children with unilateral hearing loss.

    PubMed

    Ead, Banan; Hale, Sandra; DeAlwis, Duneesha; Lieu, Judith E C

    2013-11-01

    The objective of this study was to obtain preliminary data on the cognitive function of children with unilateral hearing loss in order to identify, quantify, and interpret differences in cognitive and language functions between children with unilateral hearing loss and with normal hearing. Fourteen children ages 9-14 years old (7 with severe-to-profound sensorineural unilateral hearing loss and 7 sibling controls with normal hearing) were administered five tests that assessed cognitive functions of working memory, processing speed, attention, and phonological processing. Mean composite scores for phonological processing were significantly lower for the group with unilateral hearing loss than for controls on one composite and four subtests. The unilateral hearing loss group trended toward worse performance on one additional composite and on two additional phonological processing subtests. The unilateral hearing loss group also performed worse than the control group on the complex letter span task. Analysis examining performance on the two levels of task difficulty revealed a significant main effect of task difficulty and an interaction between task difficulty and group. Cognitive function and phonological processing test results suggest two related deficits associated with unilateral hearing loss: (1) reduced accuracy and efficiency associated with phonological processing, and (2) impaired executive control function when engaged in maintaining verbal information in the face of processing incoming, irrelevant verbal information. These results provide a possible explanation for the educational difficulties experienced by children with unilateral hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Effect of signal to noise ratio on the speech perception ability of older adults

    PubMed Central

    Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh

    2016-01-01

    Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise test (SPIN) was conducted on 25 elderly participants who had bilateral low–mid frequency normal hearing thresholds at three SNRs in the presence of ipsilateral white noise. These participants were selected by convenience (availability) sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: Independent t-tests, ANOVA, and Pearson correlation were used for statistical analysis. There was a significant difference in word discrimination scores in quiet and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for the paired SNR comparisons (0 vs. +5, 0 vs. +10, and +5 vs. +10 dB; p≤0.04). No significant correlation was found between age and word recognition scores in quiet and at the three SNRs in both ears (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal low–mid frequency hearing thresholds. These results support the critical role of SNRs for speech perception ability in the elderly. Furthermore, our results revealed that normal hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
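
    The statistical comparisons named in this record (independent t-tests, ANOVA across listening conditions, Pearson correlation with age) can be illustrated with a short sketch. The data below are invented, and the conditions are treated as independent groups purely for illustration; the published analysis may have handled the repeated measurements differently.

```python
# Illustrative sketch of the analysis family named in the abstract; data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ages   = rng.integers(60, 81, size=25)        # hypothetical participant ages
quiet  = rng.normal(92, 4, size=25)           # word-discrimination scores (%) in quiet
snr_10 = rng.normal(85, 6, size=25)           # scores at +10 dB SNR
snr_5  = rng.normal(74, 8, size=25)           # scores at +5 dB SNR
snr_0  = rng.normal(58, 10, size=25)          # scores at 0 dB SNR

# Omnibus comparison across listening conditions.
f_stat, p_anova = stats.f_oneway(quiet, snr_10, snr_5, snr_0)

# Pairwise comparison of two SNR conditions (t-test, as named in the abstract).
t_stat, p_pair = stats.ttest_ind(snr_5, snr_0)

# Correlation between age and score in one condition.
r, p_corr = stats.pearsonr(ages, snr_0)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"+5 vs 0 dB SNR t-test: t={t_stat:.2f}, p={p_pair:.3f}")
print(f"Age vs score at 0 dB SNR: r={r:.2f}, p={p_corr:.3f}")
```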

  7. Perceptions & Attitudes of Male Homosexuals from Differing Socio-Cultural & Audiological Backgrounds.

    ERIC Educational Resources Information Center

    Swartz, Daniel B.

    This study examined four male homosexual, sociocultural groups: normal-hearing homosexuals with normal-hearing parents, deaf homosexuals with normal-hearing parents, deaf homosexuals with hearing-impaired parents, and hard-of-hearing homosexuals with normal-hearing parents. Differences with regard to self-perception, identity, and attitudes were…

  8. Information processing of visually presented picture and word stimuli by young hearing-impaired and normal-hearing children.

    PubMed

    Kelly, R R; Tomlinson-Keasey, C

    1976-12-01

    Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years, 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing-impaired children performed equally well with both modes (P/P and W/W), while the normal-hearing children did significantly better in the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual-code their visual information better than hearing-impaired children.

  9. Evaluation of Extended-wear Hearing Aid Technology for Operational Military Use

    DTIC Science & Technology

    2017-07-01

    for a transparent hearing protection device that could protect the hearing of normal-hearing listeners without degrading auditory situational ... method, suggest that continuous noise protection is also comparable to conventional earplug devices. Behavioral testing on listeners with normal ... associated with the extended-wear hearing aid could be adapted to provide long-term hearing protection for listeners with normal hearing with minimal

  10. Early Radiosurgery Improves Hearing Preservation in Vestibular Schwannoma Patients With Normal Hearing at the Time of Diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akpinar, Berkcan; Mousavi, Seyed H., E-mail: mousavish@upmc.edu; McDowell, Michael M.

    Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent early (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001, respectively). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal hearing patients resulted in improved retention of all hearing measures compared with later SRS.
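
    The 5-year retention percentages quoted above are actuarial estimates from censored follow-up data. As a rough illustration of how such figures can be computed, the sketch below implements a minimal Kaplan-Meier estimator on invented follow-up times; none of the numbers are taken from the study's registry.

```python
# Minimal Kaplan-Meier estimator for "months until loss of serviceable hearing".
# All follow-up data below are hypothetical, not the study's registry data.
import numpy as np

def kaplan_meier(months, event):
    """Return event times and the estimated retention (survival) curve."""
    months = np.asarray(months, dtype=float)
    event = np.asarray(event, dtype=bool)      # True = hearing lost, False = censored
    times, surv, s = [], [], 1.0
    for t in np.unique(months):
        d = np.sum((months == t) & event)      # hearing-loss events at time t
        n = np.sum(months >= t)                # patients still at risk at time t
        if d > 0:
            s *= 1.0 - d / n
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

# Hypothetical follow-up for one treatment group: months observed, and whether hearing was lost.
followup_months = [12, 18, 24, 30, 36, 48, 54, 60, 72, 75]
hearing_lost    = [0,  1,  0,  1,  0,  0,  1,  0,  0,  0]

times, retention = kaplan_meier(followup_months, hearing_lost)
at_5_years = retention[times <= 60][-1] if np.any(times <= 60) else 1.0
print(f"Estimated 5-year retention of serviceable hearing: {at_5_years:.0%}")
```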

  11. Designing of a Digital Behind-the-Ear Hearing Aid to Meet the World Health Organization Requirements

    PubMed Central

    Bento, Ricardo Ferreira; Penteado, Silvio Pires

    2010-01-01

    Hearing loss is a common health issue that affects nearly 10% of the world population as indicated by many international studies. The hearing impaired typically experience more frustration, anxiety, irritability, depression, and disorientation than those with normal hearing levels. The standard rehabilitation tool for hearing impairment is an electronic hearing aid whose main components are transducers (microphone and receiver) and a digital signal processor. These electronic components are produced by supply-chain vendors rather than by the hearing aid manufacturers themselves, and they are available either as application-specific (custom-designed) parts developed for a specific manufacturer or as generic off-the-shelf products. The choice of custom or generic components affects the product specifications, pricing, manufacturing, life cycle, and marketing strategies of the product. The World Health Organization is interested in making available to developing countries hearing aids that are inexpensive to purchase and maintain. The hearing aid presented in this article was developed with these specifications in mind, together with additional contemporary features such as four channels with wide dynamic range compression, an adjustable compression rate for each channel, four comfort programs, an adaptive feedback manager, and full volume control. This digital hearing aid is fitted using a personal computer with minimal hardware requirements and intuitive three-step fitting software. A trimmer-adjusted version can be developed where human and material resources are scarce. PMID:20724354

  12. Designing of a digital behind-the-ear hearing aid to meet the World Health Organization requirements.

    PubMed

    Bento, Ricardo Ferreira; Penteado, Silvio Pires

    2010-06-01

    Hearing loss is a common health issue that affects nearly 10% of the world population as indicated by many international studies. The hearing impaired typically experience more frustration, anxiety, irritability, depression, and disorientation than those with normal hearing levels. The standard rehabilitation tool for hearing impairment is an electronic hearing aid whose main components are transducers (microphone and receiver) and a digital signal processor. These electronic components are produced by supply-chain vendors rather than by the hearing aid manufacturers themselves, and they are available either as application-specific (custom-designed) parts developed for a specific manufacturer or as generic off-the-shelf products. The choice of custom or generic components affects the product specifications, pricing, manufacturing, life cycle, and marketing strategies of the product. The World Health Organization is interested in making available to developing countries hearing aids that are inexpensive to purchase and maintain. The hearing aid presented in this article was developed with these specifications in mind, together with additional contemporary features such as four channels with wide dynamic range compression, an adjustable compression rate for each channel, four comfort programs, an adaptive feedback manager, and full volume control. This digital hearing aid is fitted using a personal computer with minimal hardware requirements and intuitive three-step fitting software. A trimmer-adjusted version can be developed where human and material resources are scarce.

  13. Rapid Release From Listening Effort Resulting From Semantic Context, and Effects of Spectral Degradation and Cochlear Implants

    PubMed Central

    2016-01-01

    People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. PMID:27698260
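
    The key metric in this record is the growth of the task-evoked pupil response and its reduction for predictable sentences. A minimal sketch of that style of analysis on an invented pupil trace follows; the sampling rate, window lengths, and decision to compare mean baseline-corrected dilation are assumptions, not the study's actual pipeline.

```python
# Hypothetical pupillometry analysis: baseline-correct each trial's pupil trace
# and compare mean dilation for predictable vs. unpredictable sentences.
import numpy as np

fs = 60                                   # assumed sampling rate (Hz)
baseline_s, trial_s = 1.0, 6.0            # assumed baseline and trial window lengths (s)
rng = np.random.default_rng(1)

def simulate_trial(peak):
    """Invented pupil trace: baseline + slow dilation bump + noise (arbitrary units)."""
    n = int((baseline_s + trial_s) * fs)
    t = np.arange(n) / fs
    bump = peak * np.exp(-((t - 3.5) ** 2) / 2.0)      # dilation peaking mid-trial
    return 4.0 + bump + rng.normal(0, 0.02, n)

def mean_dilation(trace):
    """Baseline-correct against the pre-stimulus window and return mean dilation."""
    n_base = int(baseline_s * fs)
    return (trace[n_base:] - trace[:n_base].mean()).mean()

predictable   = [mean_dilation(simulate_trial(peak=0.10)) for _ in range(20)]
unpredictable = [mean_dilation(simulate_trial(peak=0.18)) for _ in range(20)]

# Smaller dilation for predictable sentences would indicate reduced listening effort.
print(f"predictable:   {np.mean(predictable):.3f}")
print(f"unpredictable: {np.mean(unpredictable):.3f}")
```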

  14. Deletion of SLC19A2, the high affinity thiamine transporter, causes selective inner hair cell loss and an auditory neuropathy phenotype.

    PubMed

    Liberman, M C; Tartaglini, E; Fleming, J C; Neufeld, E J

    2006-09-01

    Mutations in the gene coding for the high-affinity thiamine transporter Slc19a2 underlie the clinical syndrome known as thiamine-responsive megaloblastic anemia (TRMA) characterized by anemia, diabetes, and sensorineural hearing loss. To create a mouse model of this disease, a mutant line was created with targeted disruption of the gene. Cochlear function is normal in these mutants when maintained on a high-thiamine diet. When challenged with a low-thiamine diet, Slc19a2-null mice showed 40-60 dB threshold elevations by auditory brainstem response (ABR), but only 10-20 dB elevation by otoacoustic emission (OAE) measures. Wild-type mice retain normal hearing on either diet. Cochlear histological analysis showed a pattern uncommon for sensorineural hearing loss: selective loss of inner hair cells after 1-2 weeks on low thiamine and significantly greater inner than outer hair cell loss after longer low-thiamine challenges. Such a pattern is consistent with the observed discrepancy between ABR and OAE threshold shifts. The possible role of thiamine transport in other reported cases of selective inner hair cell loss is considered.

  15. An Evaluation of the BKB-SIN, HINT, QuickSIN, and WIN Materials on Listeners with Normal Hearing and Listeners with Hearing Loss

    ERIC Educational Resources Information Center

    Wilson, Richard H.; McArdle, Rachel A.; Smith, Sherri L.

    2007-01-01

    Purpose: The purpose of this study was to examine in listeners with normal hearing and listeners with sensorineural hearing loss the within- and between-group differences obtained with 4 commonly available speech-in-noise protocols. Method: Recognition performances by 24 listeners with normal hearing and 72 listeners with sensorineural hearing…

  16. Study of menstrual patterns in adolescent girls with disabilities in a residential institution.

    PubMed

    Joshi, Ganesh Arun; Joshi, Prajakta Ganesh

    2015-02-01

    The gynecological health needs of girls with disabilities are an issue related to their rights as individuals. The objective of this study is to describe the menstrual patterns of girls with disabilities. A descriptive study was undertaken on thirty girls with different types of disabilities in a residential institution. The diagnosis, type of disability, secondary sexual characteristics, age at menarche, menstrual pattern and practice of menstrual hygiene were noted. The girls with intellectual disabilities had a later age at menarche, irregular cycles and more behaviour problems. The girls with hearing impairment and locomotor disabilities had normal menstrual patterns. The girl with low vision had earlier menarche and regularized cycles. Girls with normal intelligence and mild intellectual disabilities were independent in maintaining menstrual hygiene. Menstrual disorders were managed conservatively in accordance with the latest guidelines. Onset of menarche is towards the extremes of the normal age range in girls with intellectual disabilities or visual impairment but not in girls with hearing impairments or locomotor disabilities. Girls with disabilities have potential for independent menstrual care.

  17. Level-dependent changes in detection of temporal gaps in noise markers by adults with normal and impaired hearing

    PubMed Central

    Horwitz, Amy R.; Ahlstrom, Jayne B.; Dubno, Judy R.

    2011-01-01

    Compression in the basilar-membrane input–output response flattens the temporal envelope of a fluctuating signal when more gain is applied to lower level than higher level temporal components. As a result, level-dependent changes in gap detection for signals with different depths of envelope fluctuation and for subjects with normal and impaired hearing may reveal effects of compression. To test these assumptions, gap detection with and without a broadband noise was measured with 1000-Hz-wide (flatter) and 50-Hz-wide (fluctuating) noise markers as a function of marker level. As marker level increased, background level also increased, maintaining a fixed acoustic signal-to-noise ratio (SNR) to minimize sensation-level effects on gap detection. Significant level-dependent changes in gap detection were observed, consistent with effects of cochlear compression. For the flatter marker, gap detection that declines with increases in level up to mid levels and improves with further increases in level may be explained by an effective flattening of the temporal envelope at mid levels, where compression effects are expected to be strongest. A flatter effective temporal envelope corresponds to a reduced effective SNR. The effects of a reduction in compression (resulting in larger effective SNRs) may contribute to better-than-normal gap detection observed for some hearing-impaired listeners. PMID:22087921
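
    The fixed acoustic SNR described above is simple decibel bookkeeping: as the marker level is raised, the background noise level is raised by the same amount. A short sketch of that arithmetic follows; the specific levels and the 10-dB SNR are illustrative assumptions, not values from the study.

```python
# Keep a fixed acoustic SNR as the noise-marker level increases:
# the background noise level tracks the marker level dB for dB.
import math

marker_levels_db = [40, 55, 70, 85]   # hypothetical noise-marker levels (dB SPL)
target_snr_db = 10                    # hypothetical fixed SNR (dB)

for marker in marker_levels_db:
    background = marker - target_snr_db
    # Power-sum of marker and background, to show the overall level presented.
    overall = 10 * math.log10(10 ** (marker / 10) + 10 ** (background / 10))
    print(f"marker {marker} dB SPL, background {background} dB SPL, overall {overall:.1f} dB SPL")
```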

  18. Gentamicin, genetic variation and deafness in preterm children

    PubMed Central

    2014-01-01

    Background Hearing loss in children born before 32 weeks of gestation is more prevalent than in full term infants. Aminoglycoside antibiotics are routinely used to treat bacterial infections in babies on neonatal intensive care units. However, this type of medication can have harmful effects on the auditory system. In order to avoid this blood levels should be maintained in the therapeutic range. However in individuals with a mitochondrial genetic variant (m.1555A > G), permanent hearing loss can occur even when drug levels are within normal limits. The aim of the study is to investigate the burden that the m.1555A > G mutation represents to deafness in very preterm infants. Method This is a case control study of children born at less than 32 completed weeks of gestation with confirmed hearing loss. Children in the control group will be matched for sex, gestational age and neonatal intensive care unit on which they were treated, and will have normal hearing. Saliva samples will be taken from children in both groups; DNA will be extracted and tested for the mutation. Retrospective pharmacological data and clinical history will be abstracted from the medical notes. Risk associated with gentamicin, m.1555A > G and other co-morbid risk factors will be evaluated using conditional logistic regression. Discussion If there is an increased burden of hearing loss with m.1555A > G and aminoglycoside use, consideration will be given to genetic testing during pregnancy, postnatal testing prior to drug administration, or the use of an alternative first line antibiotic. Detailed perinatal data collection will also allow greater definition of the causal pathway of acquired hearing loss in very preterm children. PMID:24593698
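
    Conditional logistic regression is the usual model for such matched case-control designs because it conditions out each matched set's baseline risk. A minimal sketch with statsmodels on invented data follows; the variable names, coding, and effect sizes are assumptions for illustration, not results from this protocol.

```python
# Conditional logistic regression for a matched case-control design,
# using statsmodels' ConditionalLogit. All data below are invented.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(2)
n_pairs = 60

# One case (hearing loss) and one matched control per stratum.
df = pd.DataFrame({
    "stratum": np.repeat(np.arange(n_pairs), 2),       # matched-set identifier
    "case": np.tile([1, 0], n_pairs),                  # 1 = hearing loss, 0 = matched control
    "m1555AG": rng.binomial(1, 0.05, 2 * n_pairs),     # carries the m.1555A>G variant (hypothetical)
    "gentamicin_days": rng.poisson(3, 2 * n_pairs),    # days of gentamicin exposure (hypothetical)
})

model = ConditionalLogit(
    df["case"],
    df[["m1555AG", "gentamicin_days"]],
    groups=df["stratum"],
)
result = model.fit()

# Within-stratum odds ratios for each covariate.
print(np.exp(result.params))
```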

  19. Interactions of speaking condition and auditory feedback on vowel production in postlingually deaf adults with cochlear implants.

    PubMed

    Ménard, Lucie; Polak, Marek; Denny, Margaret; Burton, Ellen; Lane, Harlan; Matthies, Melanie L; Marrone, Nicole; Perkell, Joseph S; Tiede, Mark; Vick, Jennell

    2007-06-01

    This study investigates the effects of speaking condition and auditory feedback on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels prior to implantation, and at one month and one year after implantation. There were three speaking conditions (clear, normal, and fast), and two feedback conditions after implantation (implant processor turned on and off). Ten normal-hearing controls were also recorded once. Vowel contrasts in the formant space (expressed in mels) were larger in the clear than in the fast condition, both for controls and for implant users at all three time samples. Implant users also produced differences in duration between clear and fast conditions that were in the range of those obtained from the controls. In agreement with prior work, the implant users had contrast values lower than did the controls. The implant users' contrasts were larger with hearing on than off and improved from one month to one year postimplant. Because the controls and implant users responded similarly to a change in speaking condition, it is inferred that auditory feedback, although demonstrably important for maintaining normative values of vowel contrasts, is not needed to maintain the distinctiveness of those contrasts in different speaking conditions.
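
    Vowel contrasts "in the formant space (expressed in mels)" imply converting formant frequencies from hertz to the mel scale before measuring distances between vowels. The sketch below uses the common O'Shaughnessy mel formula and rough textbook formant values; the study may have used a different mel variant and its own measured formants.

```python
# Convert formants to mels and compute a simple between-vowel contrast distance.
import math

def hz_to_mel(f_hz):
    """Common mel-scale formula: mel = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# Rough textbook-style (F1, F2) values in Hz for two American English vowels.
vowel_i = (270, 2290)   # /i/
vowel_a = (730, 1090)   # /a/

def contrast_mel(v1, v2):
    """Euclidean distance between two vowels in (F1, F2) mel space."""
    d1 = hz_to_mel(v1[0]) - hz_to_mel(v2[0])
    d2 = hz_to_mel(v1[1]) - hz_to_mel(v2[1])
    return math.hypot(d1, d2)

print(f"/i/-/a/ contrast: {contrast_mel(vowel_i, vowel_a):.1f} mels")
```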

  20. Head Position Comparison between Students with Normal Hearing and Students with Sensorineural Hearing Loss.

    PubMed

    Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2013-10-01

    Introduction The sense of head position is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim To analyze head alignment in students with normal hearing and students with sensorineural hearing loss and to compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. Head alignment was analyzed through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normal-hearing students (p < 0.001). Forward head posture was the most frequently observed postural change, occurring in greater proportion in children with hearing loss (p < 0.001), followed by side-slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in head posture than children with normal hearing.

  21. Head Position Comparison between Students with Normal Hearing and Students with Sensorineural Hearing Loss

    PubMed Central

    Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2013-01-01

    Introduction The sense of head position is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim To analyze head alignment in students with normal hearing and students with sensorineural hearing loss and to compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. Head alignment was analyzed through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normal-hearing students (p < 0.001). Forward head posture was the most frequently observed postural change, occurring in greater proportion in children with hearing loss (p < 0.001), followed by side-slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in head posture than children with normal hearing. PMID:25992037
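
    The group comparison in these two records is a test of independence on counts of students with and without a given postural change. A sketch with scipy follows; the counts are invented for illustration and are not the study's contingency tables.

```python
# Chi-square / Fisher exact test of head-posture changes vs. hearing status.
# Counts are hypothetical; the study's actual data are not reproduced here.
import numpy as np
from scipy import stats

#                  change present, change absent
table = np.array([[38, 10],    # sensorineural hearing loss (n = 48)
                  [12, 36]])   # normal hearing (n = 48)

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)   # preferred when expected counts are small

print(f"chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.4f}")
print(f"Fisher exact: OR={odds_ratio:.2f}, p={p_fisher:.4f}")
```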

  22. High-Level Psychophysical Tuning Curves: Forward Masking in Normal-Hearing and Hearing-Impaired Listeners.

    ERIC Educational Resources Information Center

    Nelson, David A.

    1991-01-01

    Forward-masked psychophysical tuning curves were obtained at multiple probe levels from 26 normal-hearing listeners and 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Results indicated that some cochlear hearing losses influence the sharp tuning capabilities usually associated with outer hair cell function. (Author/JDD)

  23. Cortical Auditory Evoked Potentials in (Un)aided Normal-Hearing and Hearing-Impaired Adults

    PubMed Central

    Van Dun, Bram; Kania, Anna; Dillon, Harvey

    2016-01-01

    Cortical auditory evoked potentials (CAEPs) are influenced by the characteristics of the stimulus, including level and hearing aid gain. Previous studies have measured CAEPs aided and unaided in individuals with normal hearing. There is a significant difference between providing amplification to a person with normal hearing and a person with hearing loss. This study investigated this difference and the effects of stimulus signal-to-noise ratio (SNR) and audibility on the CAEP amplitude in a population with hearing loss. Twelve normal-hearing participants and 12 participants with a hearing loss participated in this study. Three speech sounds—/m/, /g/, and /t/—were presented in the free field. Unaided stimuli were presented at 55, 65, and 75 dB sound pressure level (SPL) and aided stimuli at 55 dB SPL with three different gains in steps of 10 dB. CAEPs were recorded and their amplitudes analyzed. Stimulus SNRs and audibility were determined. No significant effect of stimulus level or hearing aid gain was found in normal hearers. Conversely, a significant effect was found in hearing-impaired individuals. Audibility of the signal, which in some cases is determined by the signal level relative to threshold and in other cases by the SNR, is the dominant factor explaining changes in CAEP amplitude. CAEPs can potentially be used to assess the effects of hearing aid gain in hearing-impaired users. PMID:27587919

  24. Speech-evoked auditory brainstem responses in children with hearing loss.

    PubMed

    Koravand, Amineh; Al Osman, Rida; Rivest, Véronique; Poulin, Catherine

    2017-08-01

    The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. Auditory Brainstem Responses (ABRs) were recorded using click and speech /da/ stimuli. Twenty-five children, aged 6-14 years, participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. No significant differences were observed for the click-evoked ABRs between normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant differences were found for the latencies of the following responses between the two groups: onset (V and A), transition (C), one of the steady-state waves (F), and offset (O). However, the latency of the steady-state waves (D and E) was significantly longer for the hearing-impaired compared to the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger for the hearing-impaired compared to the normal hearing group. Results obtained from the speech-evoked ABRs suggest that children with a mild to moderately severe sensorineural hearing loss have a specific pattern of subcortical auditory processing. Our results show differences for the speech-evoked ABRs in normal hearing children compared to hearing-impaired children. These results add to the body of the literature on how children with hearing loss process speech at the brainstem level. Copyright © 2017 Elsevier B.V. All rights reserved.

  25. Cortical processing of speech in individuals with auditory neuropathy spectrum disorder.

    PubMed

    Apeksha, Kumari; Kumar, U Ajith

    2018-06-01

    Auditory neuropathy spectrum disorder (ANSD) is a condition in which cochlear amplification function (involving outer hair cells) is normal but neural conduction in the auditory pathway is disordered. This study was done to investigate the cortical representation of speech in individuals with ANSD and to compare it with that of individuals with normal hearing. Forty-five participants, including 21 individuals with ANSD and 24 individuals with normal hearing, were considered for the study. Individuals with ANSD had hearing thresholds ranging from normal hearing to moderate hearing loss. Auditory cortical evoked potentials were recorded through an oddball paradigm, using 64 scalp electrodes, for the /ba/-/da/ stimulus contrast. Onset cortical responses were also recorded in a repetitive paradigm using /da/ stimuli. Sensitivity and reaction time required to identify the oddball stimuli were also obtained. Behavioural results indicated that individuals in the ANSD group had significantly lower sensitivity and longer reaction times compared to individuals with normal hearing sensitivity. Reliable P300 could be elicited in both groups. However, a significant difference in scalp topographies was observed between the two groups in both repetitive and oddball paradigms. Source localization using local autoregressive analyses revealed that activations were more diffuse in individuals with ANSD when compared to individuals with normal hearing sensitivity. Results indicated that the brain networks and regions activated in individuals with ANSD during detection and discrimination of speech sounds are different from those in normal-hearing individuals. In general, normal hearing individuals showed more focused activations, whereas activations in individuals with ANSD were diffuse.

  26. Print Knowledge of Preschool Children with Hearing Loss

    ERIC Educational Resources Information Center

    Werfel, Krystal L.; Lund, Emily; Schuele, C. Melanie

    2015-01-01

    Measures of print knowledge were compared across preschoolers with hearing loss and normal hearing. Alphabet knowledge did not differ between groups, but preschoolers with hearing loss performed lower on measures of print concepts and concepts of written words than preschoolers with normal hearing. Further study is needed in this area.

  27. Motivation to Address Self-Reported Hearing Problems in Adults with Normal Hearing Thresholds

    ERIC Educational Resources Information Center

    Alicea, Carly C. M.; Doherty, Karen A.

    2017-01-01

    Purpose: The purpose of this study was to compare the motivation to change in relation to hearing problems in adults with normal hearing thresholds but who report hearing problems and that of adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. Method: The motivation to change in…

  28. Use of Adaptive Digital Signal Processing to Improve Speech Communication for Normally Hearing and Hearing-Impaired Subjects.

    ERIC Educational Resources Information Center

    Harris, Richard W.; And Others

    1988-01-01

    A two-microphone adaptive digital noise cancellation technique improved word-recognition ability for 20 normal and 12 hearing-impaired adults by reducing multitalker speech babble and speech spectrum noise 18-22 dB. Word recognition improvements averaged 37-50 percent for normal and 27-40 percent for hearing-impaired subjects. Improvement was best…

  29. Short-Term and Working Memory Impairments in Early-Implanted, Long-Term Cochlear Implant Users Are Independent of Audibility and Speech Production

    PubMed Central

    AuBuchon, Angela M.; Pisoni, David B.; Kronenberger, William G.

    2015-01-01

    OBJECTIVES Determine if early-implanted, long-term cochlear implant (CI) users display delays in verbal short-term and working memory capacity when processes related to audibility and speech production are eliminated. DESIGN Twenty-three long-term CI users and 23 normal-hearing controls each completed forward and backward digit span tasks under testing conditions which differed in presentation modality (auditory or visual) and response output (spoken recall or manual pointing). RESULTS Normal-hearing controls reproduced more lists of digits than the CI users, even when the test items were presented visually and the responses were made manually via touchscreen response. CONCLUSIONS Short-term and working memory delays observed in CI users are not due to greater demands from peripheral sensory processes such as audibility or from overt speech-motor planning and response output organization. Instead, CI users are less efficient at encoding and maintaining phonological representations in verbal short-term memory utilizing phonological and linguistic strategies during memory tasks. PMID:26496666

  30. Short-Term and Working Memory Impairments in Early-Implanted, Long-Term Cochlear Implant Users Are Independent of Audibility and Speech Production.

    PubMed

    AuBuchon, Angela M; Pisoni, David B; Kronenberger, William G

    2015-01-01

    To determine whether early-implanted, long-term cochlear implant (CI) users display delays in verbal short-term and working memory capacity when processes related to audibility and speech production are eliminated. Twenty-three long-term CI users and 23 normal-hearing controls each completed forward and backward digit span tasks under testing conditions that differed in presentation modality (auditory or visual) and response output (spoken recall or manual pointing). Normal-hearing controls reproduced more lists of digits than the CI users, even when the test items were presented visually and the responses were made manually via touchscreen response. Short-term and working memory delays observed in CI users are not due to greater demands from peripheral sensory processes such as audibility or from overt speech-motor planning and response output organization. Instead, CI users are less efficient at encoding and maintaining phonological representations in verbal short-term memory using phonological and linguistic strategies during memory tasks.

  31. Neurotransmission in the human labyrinth.

    PubMed

    Schrott-Fischer, Anneliese; Kammen-Jolly, Keren; Scholtz, Arne W; Kong, Wei-jia; Eybalin, Michel

    2002-01-01

    Different neuroactive substances have been found in the efferent pathways of both the olivocochlear and vestibular systems. In the present study, the distribution and role of three neurotransmitter markers, choline acetyltransferase (ChAT, the acetylcholine-synthesizing enzyme), gamma-aminobutyric acid (GABA), and enkephalin, were investigated in the human labyrinth of 4 normal-hearing individuals. Immunohistochemical studies in human inner ear research, however, face a problem of procuring well-preserved specimens with maintained neurotransmitter antigenicity and morphology. Methods and findings are reported and discussed.

  32. The relationship between loudness intensity functions and the click-ABR wave V latency.

    PubMed

    Serpanos, Y C; O'Malley, H; Gravel, J S

    1997-10-01

    To assess the relationship of loudness growth and the click-evoked auditory brain stem response (ABR) wave V latency-intensity function (LIF) in listeners with normal hearing or cochlear hearing loss. The effect of hearing loss configuration on the intensity functions was also examined. Behavioral and electrophysiological intensity functions were obtained using click stimuli of comparable intensities in listeners with normal hearing (Group I; n = 10), and cochlear hearing loss of flat (Group II; n = 10) or sloping (Group III; n = 10) configurations. Individual intensity functions were obtained from measures of loudness growth using the psychophysical methods of absolute magnitude estimation and production of loudness (geometrically averaged to provide the measured loudness function), and from the wave V latency measures of the ABR. Slope analyses for the behavioral and electrophysiological intensity functions were separately performed by group. The loudness growth functions for the groups with cochlear hearing loss approximated the normal function at high intensities, with overall slope values consistent with those reported from previous psychophysical research. The ABR wave V LIF for the group with a flat configuration of cochlear hearing loss approximated the normal function at high intensities, and was displaced parallel to the normal function for the group with sloping configuration. The relationship between the behavioral and electrophysiological intensity functions was examined at individual intensities across the range of the functions for each subject. A significant relationship was obtained between loudness and the ABR wave V LIFs for the groups with normal hearing and flat configuration of cochlear hearing loss; the association was not significant (p = 0.10) for the group with a sloping configuration of cochlear hearing loss. The results of this study established a relationship between loudness and the ABR wave V latency for listeners with normal hearing, and flat cochlear hearing loss. In listeners with a sloping configuration of cochlear hearing loss, the relationship was not significant. This suggests that the click-evoked ABR may be used to estimate loudness growth at least for individuals with normal hearing and those with a flat configuration of cochlear hearing loss. Predictive equations were derived to estimate loudness growth for these groups. The use of frequency-specific stimuli may provide more precise information on the nature of the relationship between loudness growth and the ABR wave V latency, particularly for listeners with sloping configurations of cochlear hearing loss.
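
    Both intensity functions in this record reduce to slopes and correlations over stimulus level. As a rough illustration of that kind of analysis (the values below are invented and the regression is not the study's derived predictive equation), one might fit the wave V latency-intensity function and relate it to loudness growth like this:

```python
# Fit a click-ABR wave V latency-intensity function (LIF) and relate it to
# loudness growth across the same click levels. All values are hypothetical.
import numpy as np
from scipy import stats

levels_db = np.array([40, 50, 60, 70, 80, 90])          # click level (dB)
wave_v_ms = np.array([7.9, 7.3, 6.8, 6.3, 6.0, 5.8])    # hypothetical wave V latencies (ms)
loudness  = np.array([2.0, 4.5, 9.0, 17.0, 33.0, 60.0]) # hypothetical magnitude estimates

# Slope of the LIF (ms per dB) via simple linear regression.
lif_fit = stats.linregress(levels_db, wave_v_ms)
print(f"LIF slope: {lif_fit.slope * 1000:.1f} microseconds/dB")

# Loudness growth is commonly treated as a power law (linear in log coordinates),
# so correlate log-loudness with wave V latency.
r, p = stats.pearsonr(np.log10(loudness), wave_v_ms)
print(f"log-loudness vs wave V latency: r={r:.2f}, p={p:.3f}")

# A predictive equation of the kind the study derives would map latency back to
# estimated loudness, e.g. by regressing log-loudness on latency.
pred_fit = stats.linregress(wave_v_ms, np.log10(loudness))
predicted = 10 ** (pred_fit.intercept + pred_fit.slope * 6.5)
print(f"Predicted loudness at a 6.5-ms latency: {predicted:.1f}")
```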

  33. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability-Implications for Cochlear Implant Candidacy.

    PubMed

    Firszt, Jill B; Reeder, Ruth M; Holden, Laura K

    At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.

  34. Decision strategies of hearing-impaired listeners in spectral shape discrimination

    NASA Astrophysics Data System (ADS)

    Lentz, Jennifer J.; Leek, Marjorie R.

    2002-03-01

    The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
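
    The weighting analysis described above correlates, component by component, the per-trial level difference between the two intervals with the listener's interval choice. The toy simulation below illustrates that estimation step; the decision rule, perturbation size, and "true" weights are invented stand-ins for real listener data.

```python
# Toy spectral-weight estimation for a 2AFC spectral-shape task:
# correlate each component's interval level difference with the response.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_components = 2000, 5
true_weights = np.array([0.1, 0.1, 0.6, 0.1, 0.1])   # assumed listener weights (signal on component 3)

# Random level perturbation (dB) of each component in each interval.
interval1 = rng.normal(0, 2.5, (n_trials, n_components))
interval2 = rng.normal(0, 2.5, (n_trials, n_components))
interval2[:, 2] += 1.5                                # signal increment on the 920-Hz-like component

# Simulated decisions: weighted sum of level differences plus internal noise.
evidence = (interval2 - interval1) @ true_weights + rng.normal(0, 1.0, n_trials)
chose_interval2 = (evidence > 0).astype(float)

# Estimated weights: correlation between per-component level difference and response,
# normalized so the absolute weights sum to one (a common convention).
diffs = interval2 - interval1
est = np.array([np.corrcoef(diffs[:, k], chose_interval2)[0, 1] for k in range(n_components)])
est /= np.abs(est).sum()
print(np.round(est, 2))
```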

  35. How close should a student with unilateral hearing loss stay to a teacher in a noisy classroom?

    PubMed

    Noh, Heil; Park, Yong-Gyu

    2012-06-01

    To determine the optimal seating position in a noisy classroom for students with unilateral hearing loss (UHL) without any auditory rehabilitation, as compared to normal-hearing adults and student peers. Speech discrimination scores (SDS) for babble noise at distances of 3, 4, 6, 8, and 10 m from a speaker were measured in a simulated classroom measuring 300 m³ (reverberation time = 0.43 s). Participants were students with UHL (n = 25, 10-19 years old), normal-hearing students (n = 25), and normal-hearing adults (n = 25). The SDS for the normal-hearing adults at the 3, 4, 6, 8, and 10 m distances were 90.0±6.4%, 84.7±7.9%, 80.6±10.0%, 75.5±12.6%, and 68.8±13.0%, respectively. Those for the normal-hearing students were 90.1±6.2%, 78.1±9.4%, 66.4±10.7%, 61.8±11.2%, and 60.8±10.9%. Those for the UHL group were 81.7±9.0%, 70.2±12.4%, 62.1±17.2%, 52.4±17.1%, and 48.9±17.9%. The UHL group needed a seating position of 4.35 m to achieve a mean SDS equivalent to that of normal-hearing adults seated at 10 m. Likewise, the UHL group needed to be seated at 6.27 m to achieve an SDS equivalent to that of the normal-hearing students seated at 10 m. Students with UHL in noisy classrooms require seating ranging from 4.35 m to no further than 6.27 m away from a teacher to obtain an SDS comparable to that of normal-hearing adults and student peers.
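
    The 4.35 m and 6.27 m figures are the distances at which the UHL group's mean score matches each reference group's score at 10 m. A sketch of that lookup by linear interpolation on the reported group means follows; interpolation is an assumption about how the equivalent distances were derived, although with the means above it does reproduce the reported 4.35 m and 6.27 m.

```python
# Find the UHL seating distance whose mean SDS matches a reference group's SDS at 10 m,
# by linear interpolation of the reported group means.
import numpy as np

distances_m = np.array([3, 4, 6, 8, 10], dtype=float)
sds_uhl     = np.array([81.7, 70.2, 62.1, 52.4, 48.9])   # UHL group means (%)
sds_adults_10m   = 68.8                                  # normal-hearing adults at 10 m
sds_students_10m = 60.8                                  # normal-hearing students at 10 m

def equivalent_distance(target_sds):
    # np.interp needs increasing x, so interpolate distance as a function of SDS
    # by reversing the (decreasing) UHL score curve and its distances.
    return np.interp(target_sds, sds_uhl[::-1], distances_m[::-1])

print(f"match adults at 10 m:   {equivalent_distance(sds_adults_10m):.2f} m")
print(f"match students at 10 m: {equivalent_distance(sds_students_10m):.2f} m")
```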

  36. An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners

    PubMed Central

    2014-01-01

    Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094

  37. Predicting social functioning in children with a cochlear implant and in normal-hearing children: the role of emotion regulation.

    PubMed

    Wiefferink, Carin H; Rieffe, Carolien; Ketelaar, Lizet; Frijns, Johan H M

    2012-06-01

    The purpose of the present study was to compare children with a cochlear implant and normal hearing children on aspects of emotion regulation (emotion expression and coping strategies) and social functioning (social competence and externalizing behaviors) and the relation between emotion regulation and social functioning. Participants were 69 children with cochlear implants (CI children) and 67 normal hearing children (NH children) aged 1.5-5 years. Parents answered questionnaires about their children's language skills, social functioning, and emotion regulation. Children also completed simple tasks to measure their emotion regulation abilities. Cochlear implant children had less adequate emotion-regulation strategies and were less socially competent than normal hearing children. The parents of cochlear implant children did not report fewer externalizing behaviors than those of normal hearing children. While social competence in normal hearing children was strongly related to emotion regulation, cochlear implant children regulated their emotions in ways that were unrelated to social competence. On the other hand, emotion regulation explained externalizing behaviors better in cochlear implant children than in normal hearing children. While better language skills were related to higher social competence in both groups, they were related to fewer externalizing behaviors only in cochlear implant children. Our results indicate that cochlear implant children have less adequate emotion-regulation strategies and less social competence than normal hearing children. Since they received their implants relatively recently, they might eventually catch up with their hearing peers. Longitudinal studies should further explore the development of emotion regulation and social functioning in cochlear implant children. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach.

    PubMed

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.
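
    As an illustration of the network measures mentioned above, the sketch below builds a toy syntactic dependency network and computes small-world indicators and a log-log degree-distribution slope with networkx; the edge list is hypothetical, standing in for the treebank-derived dependencies used in the study:

    ```python
    import networkx as nx
    import numpy as np
    from collections import Counter

    # Hypothetical dependency edges (head word, dependent word) from a toy treebank;
    # in the study these come from the students' annotated writing samples.
    edges = [("eat", "I"), ("eat", "apple"), ("apple", "red"),
             ("eat", "today"), ("go", "I"), ("go", "school"), ("school", "to")]

    G = nx.Graph()
    G.add_edges_from(edges)

    # Small-world indicators: clustering coefficient and average shortest path length
    print("clustering:", nx.average_clustering(G))
    print("avg path length:", nx.average_shortest_path_length(G))

    # Degree distribution; a roughly straight line on log-log axes
    # suggests a power-law-like (scale-free) distribution
    degree_counts = Counter(dict(G.degree()).values())
    ks = np.array(sorted(degree_counts))
    pk = np.array([degree_counts[k] for k in ks]) / G.number_of_nodes()
    slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)
    print("log-log slope (≈ -gamma):", slope)
    ```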

  19. Sound localization in noise in hearing-impaired listeners.

    PubMed

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  20. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability - Implications for Cochlear Implant Candidacy

    PubMed Central

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.

    2016-01-01

    Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750

  1. Pejvakin, a Candidate Stereociliary Rootlet Protein, Regulates Hair Cell Function in a Cell-Autonomous Manner

    PubMed Central

    Kazmierczak, Piotr; Harris, Suzan L.; Shah, Prahar; Puel, Jean-Luc; Lenoir, Marc

    2017-01-01

    Mutations in the Pejvakin (PJVK) gene are thought to cause auditory neuropathy and hearing loss of cochlear origin by affecting noise-induced peroxisome proliferation in auditory hair cells and neurons. Here we demonstrate that loss of pejvakin in hair cells, but not in neurons, causes profound hearing loss and outer hair cell degeneration in mice. Pejvakin binds to and colocalizes with the rootlet component TRIOBP at the base of stereocilia in injectoporated hair cells, a pattern that is disrupted by deafness-associated PJVK mutations. Hair cells of pejvakin-deficient mice develop normal rootlets, but hair bundle morphology and mechanotransduction are affected before the onset of hearing. Some mechanotransducing shorter row stereocilia are missing, whereas the remaining ones exhibit overextended tips and a greater variability in height and width. Unlike previous studies of Pjvk alleles with neuronal dysfunction, our findings reveal a cell-autonomous role of pejvakin in maintaining stereocilia architecture that is critical for hair cell function. SIGNIFICANCE STATEMENT Two missense mutations in the Pejvakin (PJVK or DFNB59) gene were first identified in patients with audiological hallmarks of auditory neuropathy spectrum disorder, whereas all other PJVK alleles cause hearing loss of cochlear origin. These findings suggest that complex pathogenetic mechanisms underlie human deafness DFNB59. In contrast to recent studies, we demonstrate that pejvakin in auditory neurons is not essential for normal hearing in mice. Moreover, pejvakin localizes to stereociliary rootlets in hair cells and is required for stereocilia maintenance and mechanosensory function of the hair bundle. Delineating the site of the lesion and the mechanisms underlying DFNB59 will allow clinicians to predict the efficacy of different therapeutic approaches, such as determining compatibility for cochlear implants. PMID:28209736

  2. Postural control assessment in students with normal hearing and sensorineural hearing loss.

    PubMed

    Melo, Renato de Souza; Lemos, Andrea; Macky, Carla Fabiana da Silva Toscano; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2015-01-01

    Children with sensorineural hearing loss can present with instabilities in postural control, possibly as a consequence of hypoactivity of their vestibular system due to internal ear injury. To assess postural control stability in students with normal hearing (i.e., listeners) and with sensorineural hearing loss, and to compare data between groups, considering gender and age. This cross-sectional study evaluated the postural control of 96 students, 48 listeners and 48 with sensorineural hearing loss, aged between 7 and 18 years, of both genders, using the Balance Error Scoring System (BESS) scale. This tool assesses postural control in two sensory conditions: stable surface and unstable surface. For statistical analysis between groups, the Wilcoxon test for paired samples was used. Students with hearing loss showed more instability in postural control than those with normal hearing, with significant differences between groups on both surfaces (stable and unstable; p < 0.001). Students with sensorineural hearing loss showed greater instability in postural control than normal-hearing students of the same gender and age. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
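
    The abstract names the Wilcoxon test for paired samples as the between-group analysis. A minimal sketch with hypothetical BESS error counts for matched pairs (the real study had 48 pairs; higher scores mean more postural-control errors):

    ```python
    from scipy.stats import wilcoxon

    # Hypothetical BESS error counts for age- and gender-matched pairs
    bess_hearing_loss = [14, 18, 11, 20, 16, 13, 19, 15]
    bess_normal       = [ 9, 12,  8, 14, 10,  9, 13, 11]

    # Paired (matched-samples) Wilcoxon signed-rank test
    stat, p = wilcoxon(bess_hearing_loss, bess_normal)
    print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
    ```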

  3. Effects of reverberation and noise on speech intelligibility in normal-hearing and aided hearing-impaired listeners.

    PubMed

    Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh

    2018-03-01

    Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that at least part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between noise characteristics and reverberation was observed in the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.
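
    The abstract reports correcting intelligibility scores for ceiling effects but does not state the method; one common choice in speech-intelligibility research is the rationalized arcsine transform (Studebaker, 1985), sketched below as an assumption rather than the authors' actual procedure:

    ```python
    import numpy as np

    def rau(x_correct, n_items):
        """Rationalized arcsine units (Studebaker, 1985): stretches scores near
        0% and 100% so that near-ceiling differences are not compressed."""
        theta = (np.arcsin(np.sqrt(x_correct / (n_items + 1.0)))
                 + np.arcsin(np.sqrt((x_correct + 1.0) / (n_items + 1.0))))
        return (146.0 / np.pi) * theta - 23.0

    # Near-ceiling raw scores compress differences; RAU spreads them back out.
    print(rau(48, 50), rau(50, 50))   # 96% vs. 100% correct on a 50-item list
    ```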

  4. Comparison of single-microphone noise reduction schemes: can hearing impaired listeners tell the difference?

    PubMed

    Huber, Rainer; Bisitz, Thomas; Gerkmann, Timo; Kiessling, Jürgen; Meister, Hartmut; Kollmeier, Birger

    2018-06-01

    The perceived qualities of nine different single-microphone noise reduction (SMNR) algorithms were evaluated and compared in subjective listening tests with normal hearing and hearing impaired (HI) listeners. Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortions, intrusiveness of background noise, listening effort and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method. 18 normal hearing and 18 moderately HI subjects participated in the study. Significant differences between the rating behaviours of the two subject groups were observed: while normal hearing subjects clearly differentiated between different SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal hearing subjects. It seems harder for HI listeners to distinguish between additive noise and speech distortions, and/or they might have a different understanding of the term "speech distortion" than normal hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.

  5. Effects of Noise on Speech Recognition and Listening Effort in Children with Normal Hearing and Children with Mild Bilateral or Unilateral Hearing Loss

    ERIC Educational Resources Information Center

    Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin

    2016-01-01

    Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method: Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL,…

  6. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach

    PubMed Central

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences. PMID:27920733

  7. Children who are deaf or hard of hearing in inclusive educational settings: a literature review on interactions with peers.

    PubMed

    Xie, Yu-Han; Potměšil, Miloň; Peters, Brenda

    2014-10-01

    This review is conducted to describe how children who are deaf or hard of hearing (D/HH) interact with hearing peers in inclusive settings, illustrate the difficulties and challenges faced by them in interacting with peers, and identify effective interventions that promote their social interaction in inclusive education. A systematic search of databases and journals identified 21 papers that met the inclusion criteria. Two broad themes emerged from an analysis of the literature, which included processes and outcomes of interactions with peers and intervention programs. The research indicates that children who are D/HH face great difficulties in communicating, initiating/entering, and maintaining interactions with hearing peers in inclusive settings. The co-enrollment and social skills training programs are considered to be effective interventions for their social interaction. Communication abilities and social skills of children who are D/HH, responses of children with normal hearing, and the effect of environment are highlighted as crucial aspects of social interactions. In addition, future research is needed to study the interaction between children who are D/HH and hearing peers in natural settings, at different stages of school life, as well as improving social interaction and establishing an inclusive classroom climate for children who are D/HH. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    PubMed

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance was used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present. For phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and rates of change were not sufficient to catch up to their peers over time.

  9. 10 CFR 110.102 - Hearing docket.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    10 CFR 110.102, Hearing docket (Energy; Nuclear Regulatory Commission; Export and Import of Nuclear Equipment and Material; Hearings): For each hearing, the Secretary will maintain a docket which will include the hearing...

  10. [Risk of noise-induced hearing loss caused by radio communication? Audiologic findings in helicopter crews and pilots of propeller airplanes].

    PubMed

    Matschke, R G

    1987-12-01

    The effects of noise on the human inner ear have been well known for a long time, and measures to prevent occupational noise-induced hearing loss have produced a clear reduction in morbidity statistics. Nevertheless, there are working environments in which the use of ear protection seems inapplicable because communication by speech is indispensable, for example in the cockpit of an aircraft. Noise exposure measurements were performed on pilots of helicopters and propeller aircraft of the German Federal Navy during realistic flight situations. Ambient noise levels during regular flight service were between 89 dB and 120 dB. The sound protection provided by flight helmets and headphones is not only neutralised while using radio and intercom; the noise during radio communication is even louder than the noise of the engines. The use of ear protection to avoid excessive noise exposure is therefore of only limited effectiveness. While pilots with normal hearing show only little impairment of speech intelligibility, those with noise-induced hearing loss show substantial impairment that varies in proportion to their hearing loss. Communication abilities may be drastically reduced, which may compromise the reliability of radio communication. The problem may possibly be solved in the future by an electronic noise-compensation system.

  11. Decision-making and outcomes of hearing help-seekers: A self-determination theory perspective.

    PubMed

    Ridgway, Jason; Hickson, Louise; Lind, Christopher

    2016-07-01

    To explore the explanatory power of a self-determination theory (SDT) model of health behaviour change for hearing aid adoption decisions and fitting outcomes. A quantitative approach was taken for this longitudinal cohort study. Participants completed questionnaires adapted from SDT that measured autonomous motivation, autonomy support, and perceived competence for hearing aids. Hearing aid fitting outcomes were obtained with the international outcomes inventory for hearing aids (IOI-HA). Sociodemographic and audiometric information was collected. Participants were 216 adult first-time hearing help-seekers (125 hearing aid adopters, 91 non-adopters). Regression models assessed the impact of autonomous motivation and autonomy support on hearing aid adoption and hearing aid fitting outcomes. Sociodemographic and audiometric factors were also taken into account. Autonomous motivation, but not autonomy support, was associated with increased hearing aid adoption. Autonomy support was associated with increased perceived competence for hearing aids, reduced activity limitation and increased hearing aid satisfaction. Autonomous motivation was positively associated with hearing aid satisfaction. The SDT model is potentially useful in understanding how hearing aid adoption decisions are made, and how hearing health behaviour is internalized and maintained over time. Autonomy supportive practitioners may improve outcomes by helping hearing aid adopters maintain internalized change.
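
    The regression of a binary adoption outcome on motivation and support scores described above can be illustrated with a logistic model; the sketch below uses simulated questionnaire data (the scale ranges, coefficients, and the omission of the audiometric and sociodemographic covariates are all assumptions, not the study's actual model):

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical questionnaire scores (1-7 scales) and adoption outcome (1 = adopted);
    # the real study had 216 help-seekers and also adjusted for other factors.
    rng = np.random.default_rng(0)
    n = 216
    autonomous_motivation = rng.uniform(1, 7, n)
    autonomy_support = rng.uniform(1, 7, n)
    # Simulate adoption as more likely with higher autonomous motivation
    logit_p = -3.0 + 0.6 * autonomous_motivation + 0.05 * autonomy_support
    adopted = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([autonomous_motivation, autonomy_support]))
    model = sm.Logit(adopted, X).fit(disp=0)
    print(model.summary(xname=["const", "autonomous_motivation", "autonomy_support"]))
    ```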

  12. Perception of a Sung Vowel as a Function of Frequency-Modulation Rate and Excursion in Listeners with Normal Hearing and Hearing Impairment

    ERIC Educational Resources Information Center

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik; Dau, Torsten

    2014-01-01

    Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM…

  13. Effect of occlusion, directionality and age on horizontal localization

    NASA Astrophysics Data System (ADS)

    Alworth, Lynzee Nicole

    Localization acuity of a given listener is dependent upon the ability to discriminate between interaural time and level disparities. Interaural time differences are encoded by low-frequency information, whereas interaural level differences are encoded by high-frequency information. Much research has examined the effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study was to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on the horizontal localization ability of a given listener. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 years hearing impaired). Each normal-hearing participant completed one localization experiment (unaided/unamplified) in which they listened to the stimulus "Baseball" and selected the point of origin. Hearing-impaired listeners were fit with the same two receiver-in-the-ear hearing aids and the same dome types, thus controlling for microphone technologies, type of dome, and fitting between trials. Hearing-impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing. Results also revealed a significant difference in performance between dome types; however, upon further examination this effect was not significant, so results examining type of dome should be viewed with caution. Results examining microphone configuration and microphone configuration by dome type were not significant. Moreover, results evaluating performance relative to the unaided (unamplified) condition were not significant. Taken together, these results suggest that open-fit hearing instruments, regardless of microphone or dome type, do not degrade horizontal localization acuity within a given listener, relative to their older-aged normal-hearing counterparts, in quiet environments.
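
    The interaural time difference cue mentioned above can be approximated with Woodworth's spherical-head model; the sketch below is a generic illustration (the head radius and speed of sound are assumed values, and the formula is not part of the study itself):

    ```python
    import numpy as np

    def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
        """Approximate interaural time difference (s) for a source at a given
        azimuth, using Woodworth's spherical-head model (valid up to ~90 deg)."""
        theta = np.radians(azimuth_deg)
        return (head_radius_m / c) * (theta + np.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"{az:2d} deg -> ITD ≈ {woodworth_itd(az) * 1e6:.0f} µs")
    ```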

  14. [Emotional response to music by postlingually-deafened adult cochlear implant users].

    PubMed

    Wang, Shuo; Dong, Ruijuan; Zhou, Yun; Li, Jing; Qi, Beier; Liu, Bo

    2012-10-01

    To assess the emotional response to music by postlingually-deafened adult cochlear implant users. The Munich Music Questionnaire (MUMU) was used to match the music experience and the motivation for the use of music between 12 normal-hearing and 12 cochlear implant subjects. The emotion rating test in the Musical Sounds in Cochlear Implants (MuSIC) test battery was used to assess the emotion perception ability of both normal-hearing and cochlear implant subjects. A total of 15 musical phrases were used. Responses were given by selecting a rating on a scale from 1 to 10, where "1" represents a "very sad" feeling and "10" represents a "very happy" feeling. In comparison with normal-hearing subjects, the 12 cochlear implant subjects made less active use of music for emotional purposes. The emotion ratings for cochlear implant subjects were similar to those of normal-hearing subjects, but with large variability. Post-lingually deafened cochlear implant subjects on average performed similarly to normal-hearing subjects in the emotion rating tasks, but their active use of music for emotional purposes was clearly less than that of normal-hearing subjects.

  15. Evaluation of gap filling skills and reading mistakes of cochlear implanted and normally hearing students.

    PubMed

    Çizmeci, Hülya; Çiprut, Ayça

    2018-06-01

    This study aimed to (1) evaluate the gap-filling skills and reading mistakes of students with cochlear implants, and (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total time of cochlear implant use were analyzed in relation to the subjects' reading skills development. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, who were enrolled in the 6th to 8th grades. The subjects' ages ranged between 12 and 14 years. Their reading skills were evaluated using the Informal Reading Inventory. A significant difference was found between implanted and normal-hearing students in terms of the percentage of reading errors and the percentage of gap-filling scores. On average, students using cochlear implants made more reading errors than normal-hearing students, and their gap-filling performance on the passages was lower than that of their normal-hearing peers. No significant effect of implantation age or duration of implant use on the reading performance of implanted students was found. Even when implanted early, implanted students in the older grades showed significant differences in reading performance compared with their normal-hearing peers. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Hearing screening in children with skeletal dysplasia.

    PubMed

    Tunkel, David E; Kerbavaz, Richard; Smith, Beth; Rose-Hardison, Danielle; Alade, Yewande; Hoover-Fong, Julie

    2011-12-01

    To determine the prevalence of hearing loss and abnormal tympanometry in children with skeletal dysplasia. Clinical screening program. National convention of the Little People of America. Convenience sample of volunteers aged 18 years or younger with skeletal dysplasias. Hearing screening with behavioral testing and/or otoacoustic emissions, otoscopy, and tympanometry. A failed hearing screen was defined as a threshold of 35 dB HL (hearing level) or greater at 1 or more tested frequencies, or by a "fail" otoacoustic emissions response. Types B and C tympanograms were considered abnormal. A total of 58 children (aged ≤18 years) with skeletal dysplasia enrolled, and 56 completed hearing screening. Forty-one children had normal hearing (71%); 9 failed in 1 ear (16%); and 6 failed in both ears (10%). Forty-four children had achondroplasia, and 31 had normal hearing in both ears (71%); 8 failed hearing screening in 1 ear (18%), and 3 in both ears (7%). Tympanometry was performed in 45 children, with normal tympanograms found in 21 (47%), bilateral abnormal tympanograms in 15 (33%), and unilateral abnormal tympanograms in 9 (20%). Fourteen children with achondroplasia had normal tympanograms (42%); 11 had bilateral abnormal tympanograms (33%); and 8 had unilateral abnormal tympanograms (24%). For children without functioning tympanostomy tubes, the odds of hearing loss were 9.5 times greater if tympanometry was abnormal (P = .03). Hearing loss and middle-ear disease are both highly prevalent in children with skeletal dysplasias. Abnormal tympanometry is highly associated with the presence of hearing loss, as expected in children with eustachian tube dysfunction. Hearing screening with medical intervention is recommended for these children.
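
    The 9.5-times-greater odds reported above come from a 2x2 cross-tabulation of tympanometry result against screening outcome. A minimal sketch of how such an odds ratio and its confidence interval are computed, using hypothetical cell counts (the abstract does not give the full table):

    ```python
    import numpy as np
    from scipy.stats import fisher_exact

    # Hypothetical 2x2 table (rows: abnormal vs. normal tympanometry,
    # columns: failed vs. passed hearing screen); the abstract reports OR ≈ 9.5.
    table = np.array([[12, 10],
                      [ 3, 24]])

    odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
    _, p = fisher_exact(table)

    # Woolf's logit-based 95% confidence interval
    se_log_or = np.sqrt((1.0 / table).sum())
    ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
    print(f"OR = {odds_ratio:.1f}, 95% CI = {ci.round(1)}, Fisher p = {p:.3f}")
    ```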

  17. A comparative evaluation of dental caries status among hearing-impaired and normal children of Malda, West Bengal, evaluated with the Caries Assessment Spectrum and Treatment.

    PubMed

    Kar, Sudipta; Kundu, Goutam; Maiti, Shyamal Kumar; Ghosh, Chiranjit; Bazmi, Badruddin Ahamed; Mukhopadhyay, Santanu

    2016-01-01

    Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, utilizing the Caries Assessment Spectrum and Treatment (CAST). In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed; statistical analysis was carried out using the Z-test. A statistically significant difference was found between the study (hearing-impaired) and control (normal children) groups: about 30.51% of hearing-impaired children were affected by caries compared to 15.81% of normal children (P < 0.05). Regarding the individual caries assessment criteria, nearly all subgroups showed statistically significant differences, except for the sealed tooth structure, internal caries-related discoloration in dentin, and distinct cavitation into dentine subgroups. The dental health of hearing-impaired children was found to be less satisfactory than that of normal children with respect to dental caries status evaluated with CAST.
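
    The Z-test comparison of the two caries prevalences (30.51% vs. 15.81%) can be illustrated with a standard two-proportion z-test; the group sizes below are assumed for illustration, since the abstract does not report them:

    ```python
    import numpy as np
    from scipy.stats import norm

    def two_proportion_z(p1, n1, p2, n2):
        """Two-sided z-test for the difference between two independent proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, 2 * norm.sf(abs(z))

    # Prevalences from the abstract; group sizes are assumed for illustration.
    z, p = two_proportion_z(0.3051, 200, 0.1581, 200)
    print(f"z = {z:.2f}, p = {p:.4f}")
    ```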

  18. Improving flexible thinking in deaf and hard of hearing children with virtual reality technology.

    PubMed

    Passig, D; Eden, S

    2000-07-01

    The study investigated whether rotating three-dimensional (3-D) objects using virtual reality (VR) would affect flexible thinking in deaf and hard of hearing children. Deaf and hard of hearing subjects were distributed into experimental and control groups. The experimental group played virtual 3-D Tetris (a game using VR technology) individually, 15 minutes once weekly over 3 months. The control group played conventional two-dimensional (2-D) Tetris over the same period. Children with normal hearing participated as a second control group in order to establish whether deaf and hard of hearing children really are disadvantaged in flexible thinking. Before-and-after testing showed significantly improved flexible thinking in the experimental group; the deaf and hard of hearing control group showed no significant improvement. Also, before the experiment, the deaf and hard of hearing children scored lower in flexible thinking than the children with normal hearing. After the experiment, the difference between the experimental group and the control group of children with normal hearing was smaller.

  19. Affective Properties of Mothers' Speech to Infants with Hearing Impairment and Cochlear Implants

    ERIC Educational Resources Information Center

    Kondaurova, Maria V.; Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine

    2015-01-01

    Purpose: The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method:…

  20. Coordination of Gaze and Speech in Communication between Children with Hearing Impairment and Normal-Hearing Peers

    ERIC Educational Resources Information Center

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-01-01

    Purpose: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Method: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions,…

  1. Story retelling skills in Persian speaking hearing-impaired children.

    PubMed

    Jarollahi, Farnoush; Mohamadi, Reyhane; Modarresi, Yahya; Agharasouli, Zahra; Rahimzadeh, Shadi; Ahmadi, Tayebeh; Keyhani, Mohammad-Reza

    2017-05-01

    Since the pragmatic skills of hearing-impaired Persian-speaking children have not yet been investigated, particularly through story retelling, this study aimed to evaluate some pragmatic abilities of normal-hearing and hearing-impaired children using a story retelling test. 15 normal-hearing and 15 profoundly hearing-impaired 7-year-old children were evaluated using the story retelling test, which had a content validity of 89%, construct validity of 85%, and reliability of 83%. Three macrostructure criteria (topic maintenance, event sequencing, and explicitness) and four microstructure criteria (referencing, conjunctive cohesion, syntax complexity, and utterance length) were assessed. The test was performed with live voice in a quiet room, and the children were then asked to retell the story. The children's responses were recorded, transcribed, scored, and analyzed. In the macrostructure criteria, the utterances of hearing-impaired students were less consistent, gave listeners too little information for a full understanding of the subject, and expressed the story events in a rational order less frequently than those of the normal-hearing group (P < 0.0001). Regarding the microstructure criteria of the test, unlike the normal-hearing students who obtained high scores, hearing-impaired students failed to gain any scores on the items of this section. These results suggest that hearing-impaired children were not able to use language as effectively as their hearing peers, and they utilized quite different pragmatic functions. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    PubMed

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.
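
    NFC algorithms of the kind described above map input frequencies above a cutoff to lower output frequencies according to a compression ratio. A minimal sketch of one common log-frequency formulation (an assumption for illustration; the exact mapping used by the study's algorithm is not specified in the abstract):

    ```python
    import numpy as np

    def nfc_map(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
        """Map input frequencies above the cutoff to lower output frequencies
        using log-frequency compression (one common NFC formulation)."""
        f_in = np.asarray(f_in_hz, dtype=float)
        compressed = cutoff_hz * (f_in / cutoff_hz) ** (1.0 / ratio)
        return np.where(f_in <= cutoff_hz, f_in, compressed)

    print(nfc_map([1000, 2000, 4000, 8000]))  # e.g. 8 kHz -> 4 kHz with CR = 2
    ```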

  3. Evidence for Website Claims about the Benefits of Teaching Sign Language to Infants and Toddlers with Normal Hearing

    ERIC Educational Resources Information Center

    Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer

    2012-01-01

    The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…

  4. Army Hearing Program Talking Points Calendar Year 2016

    DTIC Science & Technology

    2017-09-12

    BACKGROUND: Hearing health in the Army has improved over time, largely due to the dedicated work of hearing health experts. However, noise-induced hearing loss and associated problems have not been eliminated. The Army Hearing Program continually evolves to address hearing health challenges, and maintains the momentum to build iteratively upon...

  5. Initial Stop Voicing in Bilingual Children with Cochlear Implants and Their Typically Developing Peers with Normal Hearing

    ERIC Educational Resources Information Center

    Bunta, Ferenc; Goodin-Mayeda, C. Elizabeth; Procter, Amanda; Hernandez, Arturo

    2016-01-01

    Purpose: This study focuses on stop voicing differentiation in bilingual children with normal hearing (NH) and their bilingual peers with hearing loss who use cochlear implants (CIs). Method: Twenty-two bilingual children participated in our study (11 with NH, "M" age = 5;1 [years;months], and 11 with CIs, "M" hearing age =…

  6. False Belief Development in Children Who Are Hard of Hearing Compared with Peers with Normal Hearing

    ERIC Educational Resources Information Center

    Walker, Elizabeth A.; Ambrose, Sophie E.; Oleson, Jacob; Moeller, Mary Pat

    2017-01-01

    Purpose: This study investigates false belief (FB) understanding in children who are hard of hearing (CHH) compared with children with normal hearing (CNH) at ages 5 and 6 years and at 2nd grade. Research with this population has theoretical significance, given that the early auditory-linguistic experiences of CHH are less restricted compared with…

  7. Judgments of Emotion in Clear and Conversational Speech by Young Adults with Normal Hearing and Older Adults with Hearing Impairment

    ERIC Educational Resources Information Center

    Morgan, Shae D.; Ferguson, Sarah Hargus

    2017-01-01

    Purpose: In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style. Method: The first experiment included 18 YNH listeners, and the second included 10 additional…

  8. How Hearing Loss and Age Affect Emotional Responses to Nonspeech Sounds

    ERIC Educational Resources Information Center

    Picou, Erin M.

    2016-01-01

    Purpose: The purpose of this study was to evaluate the effects of hearing loss and age on subjective ratings of emotional valence and arousal in response to nonspeech sounds. Method: Three groups of adults participated: 20 younger listeners with normal hearing (M = 24.8 years), 20 older listeners with normal hearing (M = 55.8 years), and 20 older…

  9. Auditory and tactile gap discrimination by observers with normal and impaired hearing.

    PubMed

    Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J

    2014-02-01

    Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.

  10. Cochlear Implants Special Issue Article: Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users

    PubMed Central

    Luo, Xin; Fu, Qian-Jie; Galvin, John J.

    2007-01-01

    The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas listeners with cochlear implant recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in listeners with cochlear implant as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution. PMID:18003871
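
    The acoustic cochlear implant simulations in experiment 3 varied the number of channels and the envelope filter cutoff. A crude noise-vocoder sketch along those lines (the band edges, filter orders, and the variable name speech_samples are illustrative assumptions, not the study's exact processing):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, n_channels=8, env_cutoff_hz=50.0, fmin=100.0, fmax=7000.0):
        """Crude noise-vocoder CI simulation: split the signal into log-spaced bands,
        extract each band's envelope (low-passed at env_cutoff_hz), and use the
        envelopes to modulate band-limited noise carriers."""
        x = np.asarray(x, dtype=float)
        edges = np.geomspace(fmin, fmax, n_channels + 1)
        env_lp = butter(4, env_cutoff_hz, btype="low", fs=fs, output="sos")
        noise = np.random.randn(len(x))
        out = np.zeros_like(x)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            env = sosfiltfilt(env_lp, np.abs(hilbert(sosfiltfilt(band, x))))
            out += np.clip(env, 0, None) * sosfiltfilt(band, noise)
        return out / (np.max(np.abs(out)) + 1e-12)

    # Usage: y = noise_vocode(speech_samples, fs=16000, n_channels=4, env_cutoff_hz=400.0)
    ```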

  11. Visual Field Abnormalities among Adolescent Boys with Hearing Impairments

    PubMed Central

    KHORRAMI-NEJAD, Masoud; HERAVIAN, Javad; SEDAGHAT, Mohamad-Reza; MOMENI-MOGHADAM, Hamed; SOBHANI-RAD, Davood; ASKARIZADEH, Farshad

    2016-01-01

    The aim of this study was to compare the visual field (VF) categorizations (based on the severity of VF defects) between adolescent boys with hearing impairments and those with normal hearing. This cross-sectional study involved the evaluation of the VF of 64 adolescent boys with hearing impairments and 68 age-matched boys with normal hearing at high schools in Tehran, Iran, in 2013. All subjects had an intelligence quotient (IQ) > 70. The hearing impairments were classified based on severity and time of onset. Participants underwent a complete eye examination, and the VFs were investigated using automated perimetry with a Humphrey Visual Field Analyzer. This device was used to determine their foveal threshold (FT), mean deviation (MD), and Glaucoma Hemifield Test (GHT) results. Half (50%) of the boys with hearing impairments had profound hearing impairments. There was no significant between-group difference in age (P = 0.49) or IQ (P = 0.13). There was no between-group difference in the corrected distance visual acuity (P = 0.183). According to the FT, MD, and GHT results, the percentage of boys with abnormal VFs in the hearing impairment group was significantly greater than that in the normal hearing group: 40.6% vs. 22.1%, 59.4% vs. 19.1%, and 31.2% vs. 8.8%, respectively (P < 0.0001). The mean MD in the hearing impairment group was significantly worse than that in the normal hearing group (-4.61 ± 6.52 vs. -0.79 ± 2.04 dB, respectively, P < 0.0001), and the mean FT was also significantly worse (35.30 ± 1.43 vs. 38.97 ± 1.66 dB, respectively, P < 0.0001). Moreover, there was a significant between-group difference in the GHT results (P < 0.0001). Thus, there were higher percentages of boys with VF abnormalities, and worse mean MD, FT, and GHT results, among those with hearing impairments compared to those with normal hearing. These findings emphasize the need for detailed VF assessments for patients with hearing impairments. PMID:28293650

  12. Talker Differences in Clear and Conversational Speech: Perceived Sentence Clarity for Young Adults with Normal Hearing and Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus; Morgan, Shae D.

    2018-01-01

    Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics…

  13. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  14. [Communication and noise. Speech intelligibility of airplane pilots with and without active noise compensation].

    PubMed

    Matschke, R G

    1994-08-01

    Noise exposure measurements were performed with pilots of the German Federal Navy during flight situations. Ambient noise levels during regular flight were above 90 dB(A). This noise intensity requires wearing ear protection to avoid sound-induced hearing loss. To be able to understand radio communication (ATC) in spite of a noisy environment, headphone volume must be raised above the noise of the engines. The use of ear plugs in addition to the headsets and flight helmets is only of limited value because personal ear protection affects the intelligibility of ATC. Whereas the speech intelligibility of pilots with normal hearing is affected only to a small degree, pilots with pre-existing high-frequency hearing losses show substantial impairments of speech intelligibility that vary in proportion to the hearing deficit present. Communication abilities can be reduced drastically, which in turn can affect air traffic security. The development of active noise compensation (ANC) devices that make use of the "anti-noise" principle may be a solution to this dilemma. To evaluate the effectiveness of an ANC system and its influence on speech intelligibility, speech audiometry was performed with a German standardized test during simulated flight conditions with helicopter pilots. The results demonstrate a beneficial effect on speech understanding, especially for pilots with noise-induced hearing losses. This may help to avoid pre-retirement professional disability.
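
    The "anti-noise" principle behind ANC is to generate a phase-inverted estimate of the noise so that it cancels by destructive interference, typically with an adaptive filter. A generic single-channel LMS sketch, not the specific system evaluated with the pilots:

    ```python
    import numpy as np

    def lms_anc(reference, primary, n_taps=32, mu=0.01):
        """Single-channel feedforward ANC sketch: adapt an FIR filter so that its
        output (the 'anti-noise') cancels the noise component in the primary signal."""
        w = np.zeros(n_taps)
        error = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps + 1:n + 1][::-1]   # most recent reference samples
            anti_noise = w @ x
            error[n] = primary[n] - anti_noise           # residual heard at the ear
            w += 2 * mu * error[n] * x                   # LMS update
        return error

    # Toy check: a tone buried in noise that is observable through a reference microphone.
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(20000)
    primary = np.sin(2 * np.pi * 0.01 * np.arange(20000)) + noise
    residual = lms_anc(noise, primary)
    print("signal power before/after:", np.var(primary), "->", np.var(residual[5000:]))
    ```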

  15. Effects of frequency discrimination training on tinnitus: results from two randomised controlled trials.

    PubMed

    Hoare, Derek J; Kowalkowski, Victoria L; Hall, Deborah A

    2012-08-01

    That auditory perceptual training may alleviate tinnitus draws on two observations: (1) tinnitus probably arises from altered activity within the central auditory system following hearing loss and (2) sound-based training can change central auditory activity. Training that provides sound enrichment across hearing loss frequencies has therefore been hypothesised to alleviate tinnitus. We tested this prediction with two randomised trials of frequency discrimination training involving a total of 70 participants with chronic subjective tinnitus. Participants trained on either (1) a pure-tone standard at a frequency within their region of normal hearing, (2) a pure-tone standard within the region of hearing loss or (3) a high-pass harmonic complex tone spanning a region of hearing loss. Analysis of the primary outcome measure revealed an overall reduction in self-reported tinnitus handicap after training that was maintained at a 1-month follow-up assessment, but there were no significant differences between groups. Secondary analyses also report the effects of different domains of tinnitus handicap on the psychoacoustical characteristics of the tinnitus percept (sensation level, bandwidth and pitch) and on duration of training. Our overall findings and conclusions cast doubt on the superiority of a purely acoustic mechanism to underpin tinnitus remediation. Rather, the nonspecific patterns of improvement are more suggestive that auditory perceptual training affects impact on a contributory mechanism such as selective attention or emotional state.

  16. Auditory brainstem responses of CBA/J mice with neonatal conductive hearing losses and treatment with GM1 ganglioside.

    PubMed

    Money, M K; Pippin, G W; Weaver, K E; Kirsch, J P; Webster, D B

    1995-07-01

    Exogenous administration of GM1 ganglioside to CBA/J mice with a neonatal conductive hearing loss ameliorates the atrophy of spiral ganglion neurons, ventral cochlear nucleus neurons, and ventral cochlear nucleus volume. The present investigation demonstrates the extent of a conductive loss caused by atresia and tests the hypothesis that GM1 ganglioside treatment will ameliorate the conductive hearing loss. Auditory brainstem responses were recorded from four groups of seven mice each: two groups received daily subcutaneous injections of saline (one group had normal hearing; the other had a conductive hearing loss); the other two groups received daily subcutaneous injections of GM1 ganglioside (one group had normal hearing; the other had a conductive hearing loss). In mice with a conductive loss, decreases in hearing sensitivity were greatest at high frequencies. The decreases were determined by comparing mean ABR thresholds of the conductive loss mice with those of normal hearing mice. The conductive hearing loss induced in the mice in this study was similar to that seen in humans with congenital aural atresias. GM1 ganglioside treatment had no significant effect on ABR wave I thresholds or latencies in either group.

  17. Normal-Hearing Listeners’ and Cochlear Implant Users’ Perception of Pitch Cues in Emotional Speech

    PubMed Central

    Fuller, Christina; Gilbers, Dicky; Broersma, Mirjam; Goudbeek, Martijn; Free, Rolien; Başkent, Deniz

    2015-01-01

    In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study’s aim is to assess whether due to degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions, and, if so, how these differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. These recordings’ pitch cues were phonetically analyzed. The recordings were used to test 20 normal-hearing listeners’ and 20 CI users’ emotion recognition. In congruence with previous studies, high-arousal emotions had a higher mean pitch, wider pitch range, and more dominant pitches than low-arousal emotions. Regarding pitch, speakers did not differentiate emotions based on valence but on arousal. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI simulated stimuli. However, only normal-hearing listeners recognized one particular actor’s emotions worse than the other actors’. The groups behaved differently when presented with similar input, showing that they had to employ differing strategies. Considering the respective speaker’s deviating pronunciation, it appears that for normal-hearing listeners, mean pitch is a more salient cue than pitch range, whereas CI users are biased toward pitch range cues. PMID:27648210

  18. Hearing in Noise Test Brazil: standardization for young adults with normal hearing.

    PubMed

    Sbompato, Andressa Forlevise; Corteletti, Lilian Cassia Bornia Jacob; Moret, Adriane de Lima Mortari; Jacob, Regina Tangerino de Souza

    2015-01-01

    Individuals with the same speech recognition ability in quiet can have extremely different results in noisy environments. The aim was to standardize speech perception in adults with normal hearing in the free field using the Brazilian Hearing in Noise Test. Contemporary, cross-sectional cohort study. 79 adults with normal hearing and without cognitive impairment participated in the study. Lists of Hearing in Noise Test sentences were presented randomly in quiet, noise-front, noise-right, and noise-left conditions. There were no significant differences between right and left ears at any of the frequencies tested (paired t-test). Nor were significant differences observed for gender or for the interaction between these conditions. A difference was observed among the free-field positions tested, except between the noise-right and noise-left conditions. Results of speech perception in adults with normal hearing in the free field during different listening situations in noise indicated poorer performance in the condition with noise and speech in front, i.e., 0°/0°. The values found in the standardization of the Hearing in Noise Test free field can be used as a reference in the development of protocols for tests of speech perception in noise, and for monitoring individuals with hearing impairment. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  19. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    ERIC Educational Resources Information Center

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  20. Frequency-Shift Hearing Aid

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1994-01-01

    Proposed hearing aid maps spectrum of speech into band of lower frequencies at which ear remains sensitive. By redirecting normal speech frequencies into frequency band from 100 to 1,500 Hz, hearing aid allows people to understand normal conversation, including telephone calls. Principle of operation of hearing aid can be adapted to other uses, such as clearing up noisy telephone or radio communication. In addition, loudspeakers more easily understood in presence of high background noise.
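    The mapping just described amounts to squeezing the intelligible speech band into a lower band where the ear remains sensitive. The Python fragment below is a minimal, hypothetical sketch of one way to do that (a linear bin remapping of a 0-8 kHz band into 100-1500 Hz via the FFT); it is not the NASA design, it ignores phase coherence and real-time constraints, and the sample rate and band edges are assumptions.

        import numpy as np

        def compress_spectrum(signal, fs, lo=100.0, hi=1500.0, src_hi=8000.0):
            # Crude one-shot frequency compression: map energy in 0..src_hi Hz
            # linearly into the lo..hi Hz band (illustrative only; phase cues are lost).
            n = len(signal)
            spec = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            out = np.zeros_like(spec)
            src = freqs <= src_hi
            target_freqs = lo + (freqs[src] / src_hi) * (hi - lo)
            target_bins = np.round(target_freqs * n / fs).astype(int)
            np.add.at(out, target_bins, spec[src])          # accumulate remapped bins
            return np.fft.irfft(out, n=n)

        # Example: a 3 kHz tone is remapped to roughly 625 Hz
        fs = 16000
        t = np.arange(fs) / fs
        shifted = compress_spectrum(np.sin(2 * np.pi * 3000 * t), fs)

    A practical device would presumably process short overlapping frames and preserve envelope cues rather than applying a one-shot transform to the whole signal.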

  1. Effects of fundamental frequency and vocal-tract length cues on sentence segregation by listeners with hearing loss

    PubMed Central

    Mackersie, Carol L.; Dewey, James; Guthrie, Lesli A.

    2011-01-01

    The purpose was to determine the effect of hearing loss on the ability to separate competing talkers using talker differences in fundamental frequency (F0) and apparent vocal-tract length (VTL). Performance of 13 adults with hearing loss and 6 adults with normal hearing was measured using the Coordinate Response Measure. For listeners with hearing loss, the speech was amplified and filtered according to the NAL-RP hearing aid prescription. Target-to-competition ratios varied from 0 to 9 dB. The target sentence was randomly assigned to the higher or lower values of F0 or VTL on each trial. Performance improved for F0 differences up to 9 and 6 semitones for people with normal hearing and hearing loss, respectively, but only when the target talker had the higher F0. Recognition for the lower F0 target improved when trial-to-trial uncertainty was removed (9-semitone condition). Scores improved with increasing differences in VTL for the normal-hearing group. On average, hearing-impaired listeners did not benefit from VTL cues, but substantial inter-subject variability was observed. The amount of benefit from VTL cues was related to the average hearing loss in the 1–3-kHz region when the target talker had the shorter VTL. PMID:21877813

  2. The Effects of Hearing Aid Directional Microphone and Noise Reduction Processing on Listening Effort in Older Adults with Hearing Loss.

    PubMed

    Desjardins, Jamie L

    2016-01-01

    Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study-worn, commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self-reported ratings of listening effort showed no significant relationship. Directional microphone processing effectively reduced the cognitive load of listening to speech in background noise. This is significant because it is likely that listeners with hearing impairment will frequently encounter noisy speech in their everyday communications. American Academy of Audiology.

  3. Inquiring Ears Want to Know: A Fact Sheet about Your Hearing Test

    MedlinePlus

    ... track changes in hearing over time • Your hearing threshold levels (the quietest sounds you can hear) are ... Do I have normal hearing? Compare your hearing threshold levels to this scale: -10 – 25 dB 26 – ...
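    Comparing hearing threshold levels against a scale, as the fact sheet suggests, is a simple lookup. The sketch below uses commonly cited audiometric categories (normal up to 25 dB HL, then mild, moderate, moderately severe, severe, profound); the cut-offs beyond the truncated portion of this record are illustrative assumptions, not values taken from the fact sheet.

        def classify_threshold(db_hl):
            # Commonly cited audiometric categories (assumed, not from the fact sheet)
            if db_hl <= 25:
                return "normal"
            if db_hl <= 40:
                return "mild"
            if db_hl <= 55:
                return "moderate"
            if db_hl <= 70:
                return "moderately severe"
            if db_hl <= 90:
                return "severe"
            return "profound"

        print(classify_threshold(30))   # "mild"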

  4. Self-Monitoring of Listening Abilities in Normal-Hearing Children, Normal-Hearing Adults, and Children with Cochlear Implants

    PubMed Central

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Background Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one’s speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss. Purpose The purpose of this project was to determine if normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages. Research Design Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus presented in speech spectrum noise were compared to their objective performance on the same task. Study Sample Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants. Data Collection and Analysis On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners’ threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared to findings on the Objective-Subjective listening task. Results Nearly half of the children with normal hearing significantly overestimated their listening in noise ability on the Objective-Subjective listening task, compared to less than 9% of the adults. There was a significant correlation between age and results on the Objective-Subjective task, indicating that the younger children in the sample (age 7–12 yr) tended to overestimate their listening ability more than the adolescents and adults. Among the children with cochlear implants, eight of the 10 participants significantly overestimated their listening ability (as compared to 13 of the 24 normal-hearing children in the same age range). We did not find a significant relationship between results on the Objective-Subjective listening task and performance on the given measures of academic achievement or intelligence. Conclusions Findings from this study suggest that many children with normal hearing and children with cochlear implants often fail to recognize when they encounter conditions in which their listening ability is compromised. These results may have practical implications for classroom learning, particularly for children with hearing loss in mainstream settings. PMID:22436118
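    As a rough illustration of how the Objective and Subjective thresholds and the discrepancy score described above might be computed, the sketch below fits a logistic psychometric function to proportion-correct (or proportion-perceived-intelligible) data against target/masker ratio and reads off the 51% point. The logistic form, the use of scipy's curve_fit, the toy data, and the sign convention for the discrepancy are assumptions rather than the authors' exact procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(tm, midpoint, slope):
            # Proportion correct (or perceived intelligible) vs. target/masker ratio in dB
            return 1.0 / (1.0 + np.exp(-slope * (tm - midpoint)))

        def threshold_51(tm_levels, proportions):
            (midpoint, slope), _ = curve_fit(logistic, tm_levels, proportions, p0=[0.0, 1.0])
            # T/M at which the fitted function reaches 0.51
            return midpoint + np.log(0.51 / 0.49) / slope

        tm = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])
        objective = threshold_51(tm, np.array([0.10, 0.30, 0.55, 0.80, 0.95]))
        subjective = threshold_51(tm, np.array([0.30, 0.50, 0.75, 0.90, 0.98]))
        # Assumed convention: positive discrepancy = listener overestimates own ability
        discrepancy = objective - subjective
        print(round(objective, 2), round(subjective, 2), round(discrepancy, 2))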

  5. Sudden onset unilateral sensorineural hearing loss after rabies vaccination.

    PubMed

    Okhovat, Saleh; Fox, Richard; Magill, Jennifer; Narula, Antony

    2015-12-15

    A 33-year-old man developed profound sudden onset right-sided hearing loss with tinnitus and vertigo, within 24 h of pretravel rabies vaccination. There was no history of upper respiratory tract infection, systemic illness, ototoxic medication, or trauma, and otoscopic examination was normal. Pure tone audiograms (PTA) demonstrated right-sided sensorineural hearing loss (thresholds 90-100 dB) and normal left-sided hearing. MRI of the internal acoustic meatus, viral serology (hepatitis B, C, HIV and cytomegalovirus) and syphilis screen were normal. Positive Epstein-Barr virus IgG, viral capsid IgG and anticochlear antibodies (anti-HSP-70) were noted. Initial treatment involved a course of high-dose oral prednisolone and acyclovir. Repeat PTAs after 12 days of treatment showed a small improvement in hearing thresholds. Salvage intratympanic steroid injections were attempted but failed to improve hearing further. Sudden onset sensorineural hearing loss (SSNHL) is an uncommon but frightening experience for patients. This is the first report of SSNHL following rabies immunisation in an adult. 2015 BMJ Publishing Group Ltd.

  6. [Social-emotional competences in deaf and hard-of-hearing toddlers – results from an empirical study with two current parent questionnaires].

    PubMed

    Hintermair, Manfred; Sarimski, Klaus; Lang, Markus

    2017-03-01

    Hearing loss in the deaf and hard of hearing (DHH) is associated with an elevated risk of problems in socio-emotional development. Early assessment is necessary to start timely interventions. The present study tested two parent questionnaires that allow evaluation of the socio-emotional development of toddlers from a competence perspective. 128 parents with DHH toddlers aged 18 to 36 months were asked to evaluate the development of their children and their own educational competences using two preliminary German adaptations of internationally well-known social-emotional assessment measures. In addition to a series of results within the normal range, the data also reveal some specific problems in the socio-emotional development of children with hearing loss. DHH toddlers in particular show more problems developing empathic competences and maintaining relations with peers. DHH toddlers with additional handicaps have a higher risk of developing socio-emotional problems. Parental responsivity proves to be important regarding the development of socio-emotional competences in toddlers. The presented data strongly confirm results available from deaf research regarding the development and promotion of DHH children. The two questionnaires used in this study provide the opportunity to evaluate socio-emotional competences in DHH toddlers and to start appropriate interventions very early.

  7. Audiometric Predictions Using SFOAE and Middle-Ear Measurements

    PubMed Central

    Ellison, John C.; Keefe, Douglas H.

    2006-01-01

    Objective The goals of the study are to determine how well stimulus-frequency otoacoustic emissions (SFOAEs) identify hearing loss, classify hearing loss as mild or moderate-severe, and correlate with pure-tone thresholds in a population of adults with normal middle-ear function. Other goals are to determine if middle-ear function as assessed by wideband acoustic transfer function (ATF) measurements in the ear canal accounts for the variability in normal thresholds, and if the inclusion of ATFs improves the ability of SFOAEs to identify hearing loss and predict pure-tone thresholds. Design The total suppressed SFOAE signal and its corresponding noise were recorded in 85 ears (22 normal ears and 63 ears with sensorineural hearing loss) at octave frequencies from 0.5 – 8 kHz using a nonlinear residual method. SFOAEs were recorded a second time in three impaired ears to assess repeatability. Ambient-pressure ATFs were obtained in all but one of these 85 ears, and were also obtained from an additional 31 normal-hearing subjects in whom SFOAE data were not obtained. Pure-tone air- and bone-conduction thresholds and 226-Hz tympanograms were obtained on all subjects. Normal tympanometry and the absence of air-bone gaps were used to screen subjects for normal middle-ear function. Clinical decision theory was used to assess the performance of SFOAE and ATF predictors in classifying ears as normal or impaired, and linear regression analysis was used to test the ability of SFOAE and ATF variables to predict the air-conduction audiogram. Results The ability of SFOAEs to classify ears as normal or hearing impaired was significant at all test frequencies. The ability of SFOAEs to classify impaired ears as either mild or moderate-severe was significant at test frequencies from 0.5 to 4 kHz. SFOAEs were present in cases of severe hearing loss. SFOAEs were also significantly correlated with air-conduction thresholds from 0.5 to 8 kHz. The best performance occurred using the SFOAE signal-to-noise ratio (S/N) as the predictor, and the overall best performance was at 2 kHz. The SFOAE S/N measures were repeatable to within 3.5 dB in impaired ears. The ATF measures explained up to 25% of the variance in the normal audiogram; however, ATF measures did not improve SFOAE predictions of hearing loss except at 4 kHz. Conclusions In common with other OAE types, SFOAEs are capable of identifying the presence of hearing loss. In particular, SFOAEs performed better than distortion-product and click-evoked OAEs in predicting auditory status at 0.5 kHz; SFOAE performance was similar to that of other OAE types at higher frequencies except for a slight performance reduction at 4 kHz. Because SFOAEs were detected in ears with mild to severe cases of hearing loss, they may also provide an estimate of the classification of hearing loss. Although SFOAEs were significantly correlated with hearing threshold, they do not appear to have clinical utility in predicting a specific behavioral threshold. Information on middle-ear status as assessed by ATF measures offered minimal improvement in SFOAE predictions of auditory status in a population of normal and impaired ears with normal middle-ear function. However, ATF variables did explain a significant fraction of the variability in the audiograms of normal ears, suggesting that audiometric thresholds in normal ears are partially constrained by middle-ear function as assessed by ATF tests. PMID:16230898
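    A hedged sketch of the kind of clinical-decision-theory analysis described above (classifying ears as normal or impaired from the SFOAE signal-to-noise ratio) is given below as an ROC area-under-the-curve computed with scikit-learn. The labels and S/N values are toy placeholders, not the study's data, and negating S/N so that higher scores mean more impairment is simply one convenient convention.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # 1 = hearing-impaired ear, 0 = normal ear (invented labels)
        labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
        # SFOAE S/N in dB at one test frequency; lower S/N is expected in impaired ears
        sfoae_snr = np.array([18.0, 15.5, 20.2, 12.8, 4.1, -1.0, 6.3, 2.5, 9.0])

        # Negate S/N so a higher score indicates greater likelihood of impairment
        auc = roc_auc_score(labels, -sfoae_snr)
        fpr, tpr, criteria = roc_curve(labels, -sfoae_snr)
        print(f"AUC = {auc:.2f}")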

  8. Effects of Hearing Loss on Heart-Rate Variability and Skin Conductance Measured During Sentence Recognition in Noise

    PubMed Central

    Mackersie, Carol L.; MacPhee, Imola X.; Heldt, Emily W.

    2014-01-01

    Short summary (précis): Sentence recognition by participants with and without hearing loss was measured in quiet and in babble noise while monitoring two autonomic nervous system measures: heart-rate variability and skin conductance. Heart-rate variability decreased under difficult listening conditions for participants with hearing loss, but not for participants with normal hearing. Skin conductance reactivity to noise was greater for those with hearing loss than for those with normal hearing, but did not vary with the signal-to-noise ratio. Subjective ratings of workload/stress obtained after each listening condition were similar for the two participant groups. PMID:25170782

  9. Laryngeal Aerodynamics in Children with Hearing Impairment versus Age and Height Matched Normal Hearing Peers.

    PubMed

    Das, Barshapriya; Chatterjee, Indranil; Kumar, Suman

    2013-01-01

    Lack of proper auditory feedback in hearing-impaired subjects results in functional voice disorder. It is directly related to discoordination of intrinsic and extrinsic laryngeal muscles and disturbed contraction and relaxation of antagonistic muscles. A total of twenty children in the age range of 5-10 years were considered for the study. They were divided into two groups: normal hearing children and hearing aid user children. Results showed significant differences between the normal hearing and hearing aid user children in vital capacity, maximum sustained phonation, and fast adduction-abduction rate (with equal variance assumed), but no significant difference was found in peak flow. A reduced vital capacity in hearing aid user children suggests a limited use of the lung volume for speech production. It may be inferred from the study that the hearing aid user children have poor vocal proficiency, which is reflected in their voice. The impaired use of the voicing component in hearing-impaired subjects is attributed to improper auditory feedback. In summary, there was a significant difference in vital capacity, maximum sustained phonation (MSP), and fast adduction-abduction rate, and no significant difference in peak flow.

  10. Evidence of hearing loss in a “normally-hearing” college-student population

    PubMed Central

    Le Prell, C. G.; Hensley, B.N.; Campbell, K. C. M.; Hall, J. W.; Guire, K.

    2011-01-01

    We report pure-tone hearing threshold findings in 56 college students. All subjects reported normal hearing during telephone interviews, yet not all subjects had normal sensitivity as defined by well-accepted criteria. At one or more test frequencies (0.25–8 kHz), 7% of ears had thresholds ≥25 dB HL and 12% had thresholds ≥20 dB HL. The proportion of ears with abnormal findings decreased when three-frequency pure-tone-averages were used. Low-frequency PTA hearing loss was detected in 2.7% of ears and high-frequency PTA hearing loss was detected in 7.1% of ears; however, there was little evidence for “notched” audiograms. There was a statistically reliable relationship in which personal music player use was correlated with decreased hearing status in male subjects. Routine screening and education regarding hearing loss risk factors are critical as college students do not always self-identify early changes in hearing. Large-scale systematic investigations of college students’ hearing status appear to be warranted; the current sample size was not adequate to precisely measure potential contributions of different sound sources to the elevated thresholds measured in some subjects. PMID:21288064
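    For readers unfamiliar with the pure-tone-average criteria used here, the sketch below computes three-frequency PTAs and applies the 25 dB HL cut-off. Which frequencies make up the low- and high-frequency averages is an assumption (0.5/1/2 kHz and 3/4/6 kHz are common choices) rather than a detail taken from the paper, and the audiogram values are invented.

        def pta(thresholds_db_hl, freqs_khz, use_freqs):
            # Average the thresholds at the requested audiometric frequencies
            vals = [t for t, f in zip(thresholds_db_hl, freqs_khz) if f in use_freqs]
            return sum(vals) / len(vals)

        audiogram_freqs = [0.25, 0.5, 1, 2, 3, 4, 6, 8]      # kHz
        thresholds =      [10,   10,  5, 15, 25, 30, 35, 20]  # dB HL for one ear (toy data)

        low_pta  = pta(thresholds, audiogram_freqs, {0.5, 1, 2})
        high_pta = pta(thresholds, audiogram_freqs, {3, 4, 6})
        print(low_pta, high_pta, high_pta >= 25)   # flags a high-frequency PTA loss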

  11. Reading vocabulary in children with and without hearing loss: the roles of task and word type.

    PubMed

    Coppens, Karien M; Tellings, Agnes; Verhoeven, Ludo; Schreuder, Robert

    2013-04-01

    To address the problem of low reading comprehension scores among children with hearing impairment, it is necessary to have a better understanding of their reading vocabulary. In this study, the authors investigated whether task and word type differentiate the reading vocabulary knowledge of children with and without severe hearing loss. Seventy-two children with hearing loss and 72 children with normal hearing performed a lexical and a use decision task. Both tasks contained the same 180 words divided over 7 clusters, each cluster containing words with a similar pattern of scores on 8 word properties (word class, frequency, morphological family size, length, age of acquisition, mode of acquisition, imageability, and familiarity). Whereas the children with normal hearing scored better on the 2 tasks than the children with hearing loss, the size of the difference varied depending on the type of task and word. Performance differences between the 2 groups increased as words and tasks became more complex. Despite delays, children with hearing loss showed a similar pattern of vocabulary acquisition as their peers with normal hearing. For the most precise assessment of reading vocabulary possible, a range of tasks and word types should be used.

  12. Delayed auditory pathway maturation and prematurity.

    PubMed

    Koenighofer, Martin; Parzefall, Thomas; Ramsebner, Reinhard; Lucas, Trevor; Frei, Klemens

    2015-06-01

    Hearing loss is the most common sensory disorder in developed countries and leads to a severe reduction in quality of life. In this uncontrolled case series, we evaluated the auditory development in patients suffering from congenital nonsyndromic hearing impairment related to preterm birth. Six patients, delivered preterm (25th-35th gestational weeks), suffering from mild to profound congenital nonsyndromic hearing impairment and born to healthy, nonconsanguineous parents, were evaluated by otoacoustic emissions, tympanometry, brainstem-evoked response audiometry, and genetic testing. All patients were treated with hearing aids, and one patient required cochlear implantation. One preterm infant (32nd gestational week) initially presented with a 70 dB hearing loss, accompanied by negative otoacoustic emissions and normal tympanometric findings. The patient was treated with hearing aids and displayed a gradual improvement in bilateral hearing that completely normalized by 14 months of age, accompanied by the development of otoacoustic emission responses. In conclusion, we present here for the first time a fully documented preterm patient with delayed auditory pathway maturation and normalization of hearing within 14 months of birth. Although rare, postpartum development of the auditory system should, therefore, be considered in the initial stages of treating preterm hearing-impaired patients.

  13. The effects of familiarity and complexity on appraisal of complex songs by cochlear implant recipients and normal hearing adults.

    PubMed

    Gfeller, Kate; Christ, Aaron; Knutson, John; Witt, Shelley; Mehr, Maureen

    2003-01-01

    The purposes of this study were (a) to develop a test of complex song appraisal that would be suitable for use with adults who use a cochlear implant (assistive hearing device) and (b) to compare the appraisal ratings (liking) of complex songs by adults who use cochlear implants (n = 66) with a comparison group of adults with normal hearing (n = 36). The article describes the development of a computerized test for appraisal, with emphasis on its theoretical basis and the process for item selection of naturalistic stimuli. The appraisal test was administered to the 2 groups to determine the effects of prior song familiarity and subjective complexity on complex song appraisal. Comparison of the 2 groups indicates that the implant users rate 2 of 3 musical genres (country western, pop) as significantly more complex than do normal hearing adults, and give significantly less positive ratings to classical music than do normal hearing adults. Appraisal responses of implant recipients were examined in relation to hearing history, age, performance on speech perception and cognitive tests, and musical background.

  14. Rapid word-learning in normal-hearing and hearing-impaired children: effects of age, receptive vocabulary, and high-frequency amplification.

    PubMed

    Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G

    2005-12-01

    This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. Results also revealed a small increase in performance for both groups in the extended bandwidth condition, but the difference was not significant at the traditional p = 0.05 level. The ability to learn words rapidly appears to be poorer in children with hearing loss over a wide range of ages. These results coincide with the consistently poorer receptive vocabularies for these children. Neither the word-learning nor the receptive-vocabulary measures were related to the amplification histories of these children. Finally, providing an extended high-frequency bandwidth did not significantly improve rapid word-learning for either group with these stimuli.

  15. Speech recognition for bilaterally asymmetric and symmetric hearing aid microphone modes in simulated classroom environments.

    PubMed

    Ricketts, Todd A; Picou, Erin M

    2013-09-01

    This study aimed to evaluate the potential utility of asymmetrical and symmetrical directional hearing aid fittings for school-age children in simulated classroom environments. This study also aimed to evaluate speech recognition performance of children with normal hearing in the same listening environments. Two groups of school-age children 11 to 17 years of age participated in this study. Twenty participants had normal hearing, and 29 participants had sensorineural hearing loss. Participants with hearing loss were fitted with behind-the-ear hearing aids with clinically appropriate venting and were tested in 3 hearing aid configurations: bilateral omnidirectional, bilateral directional, and asymmetrical directional microphones. Speech recognition testing was completed in each microphone configuration in 3 environments: Talker-Front, Talker-Back, and Question-Answer situations. During testing, the location of the speech signal changed, but participants were always seated in a noisy, moderately reverberant classroom-like room. For all conditions, results revealed expected effects of directional microphones on speech recognition performance. When the signal of interest was in front of the listener, the bilateral directional configuration was best, and when the signal of interest was behind the listener, the bilateral omnidirectional configuration was best. Performance with asymmetric directional microphones was between the 2 symmetrical conditions. The magnitudes of directional benefits and decrements were not significantly correlated. Children with hearing loss performed similarly to their peers with normal hearing when fitted with directional microphones and the speech was presented from the front. In contrast, children with normal hearing still outperformed children with hearing loss when the speech originated from behind, even when the children were fitted with the optimal hearing aid microphone mode for the situation. Bilateral directional microphones can be effective in improving speech recognition performance for children in the classroom, as long as the child is facing the talker of interest. Bilateral directional microphones, however, can impair performance if the signal originates from behind the listener. These data also suggest that the magnitude of decrement is not predictable from an individual's benefit. The results re-emphasize the importance of appropriate switching between microphone modes so children can take full advantage of directional benefits without being hurt by directional decrements. An asymmetric fitting limits decrements, but does not lead to maximum speech recognition scores when compared with the optimal symmetrical fitting. Therefore, the asymmetric mode may not be the best option as a default fitting for children in a classroom environment. While directional microphones improve performance for children with hearing loss, their performance in most conditions continues to be impaired relative to their normal-hearing peers, particularly when the signals of interest originate from behind or from an unpredictable location.

  16. Speech Perception in Noise in Normally Hearing Children: Does Binaural Frequency Modulated Fitting Provide More Benefit than Monaural Frequency Modulated Fitting?

    PubMed

    Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul

    2011-07-01

    The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years old. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT) with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition (P<0.001). However, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varies across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide similar benefit to binaural use. Individual variations in binaural FM benefit over monaural FM suggest that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing normal-hearing children. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to see whether similar findings are obtained.

  17. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized as a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on the hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm shifted the model's impaired hearing profile toward a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  18. Categorical loudness scaling and equal-loudness contours in listeners with normal hearing and hearing loss

    PubMed Central

    Rasetshwane, Daniel M.; Trevino, Andrea C.; Gombert, Jessa N.; Liebig-Trehearn, Lauren; Kopun, Judy G.; Jesteadt, Walt; Neely, Stephen T.; Gorga, Michael P.

    2015-01-01

    This study describes procedures for constructing equal-loudness contours (ELCs) in units of phons from categorical loudness scaling (CLS) data and characterizes the impact of hearing loss on these estimates of loudness. Additionally, this study developed a metric, level-dependent loudness loss, which uses CLS data to specify the deviation from normal loudness perception at various loudness levels and as a function of frequency for an individual listener with hearing loss. CLS measurements were made in 87 participants with hearing loss and 61 participants with normal hearing. An assessment of the reliability of CLS measurements was conducted on a subset of the data. CLS measurements were reliable. There was a systematic increase in the slope of the low-level segment of the CLS functions with increasing degree of hearing loss. ELCs derived from CLS measurements were similar to standardized ELCs (International Organization for Standardization, ISO 226:2003). The presence of hearing loss decreased the vertical spacing of the ELCs, reflecting loudness recruitment and reduced cochlear compression. Representing CLS data in phons may lead to wider acceptance of CLS measurements. Like the audiogram that specifies hearing loss at threshold, level-dependent loudness loss describes the deficit for suprathreshold sounds. Such information may have implications for the fitting of hearing aids. PMID:25920842
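    The level-dependent loudness loss metric can be pictured as the extra level an impaired ear needs, relative to a normal-hearing reference, to reach the same categorical loudness. The sketch below simply differences two CLS functions sampled at the same categorical units; the toy levels and the assumption that both functions are available on a common grid (in practice interpolation would be needed) are illustrative, not the authors' procedure.

        import numpy as np

        # Categorical loudness units (CU) and the stimulus levels (dB SPL) judged to
        # produce them, for a normal-hearing reference and one impaired listener (toy data)
        cu_grid        = np.array([5, 15, 25, 35, 45])     # "very soft" ... "loud"
        level_normal   = np.array([20, 38, 55, 72, 88])    # dB SPL
        level_impaired = np.array([55, 62, 70, 80, 90])    # dB SPL; steeper low-level slope

        # Level-dependent loudness loss: extra level needed at each loudness category
        loudness_loss = level_impaired - level_normal
        for cu, loss in zip(cu_grid, loudness_loss):
            print(f"CU {cu:2d}: {loss:+d} dB")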

  19. Sensory-motor relationships in speech production in post-lingually deaf cochlear-implanted adults and normal-hearing seniors: Evidence from phonetic convergence and speech imitation.

    PubMed

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2017-07-01

    Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.
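    The convergence measure described (deviation of each produced f0 from the speaker's own mean f0, evaluated against an acoustic target) can be sketched as follows. Expressing the deviation in semitones and signing it toward the target are assumptions made for illustration; the function and variable names are hypothetical.

        import numpy as np

        def convergence_score(produced_f0_hz, baseline_mean_f0_hz, target_f0_hz):
            # Deviation of each production from the speaker's baseline mean f0,
            # in semitones, signed so that positive values indicate a shift toward the target
            produced = np.asarray(produced_f0_hz, dtype=float)
            dev_semitones = 12 * np.log2(produced / baseline_mean_f0_hz)
            target_direction = np.sign(12 * np.log2(target_f0_hz / baseline_mean_f0_hz))
            return dev_semitones * target_direction

        # Example: baseline mean f0 of 120 Hz, acoustic target at 180 Hz
        print(convergence_score([125, 132, 140], 120.0, 180.0))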

  20. Improving Mobile Phone Speech Recognition by Personalized Amplification: Application in People with Normal Hearing and Mild-to-Moderate Hearing Loss.

    PubMed

    Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew

    In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults of age between 20 and 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences of the settings were also obtained by a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly better for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.
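    The "modified one-third gain" targets mentioned above are, at their simplest, per-frequency gains equal to a fraction of the hearing level. The sketch below applies a plain one-third gain with an arbitrary extra boost above 2 kHz standing in for the unspecified modification; the numbers and the helper name are placeholders, not the app's actual prescription.

        def one_third_gain(thresholds_db_hl, freqs_hz, hf_boost_db=3.0):
            # Plain rule: gain = threshold / 3; the extra few dB above 2 kHz is a
            # placeholder for the unspecified "modification"
            gains = []
            for f, t in zip(freqs_hz, thresholds_db_hl):
                g = t / 3.0
                if f > 2000:
                    g += hf_boost_db
                gains.append(round(g, 1))
            return gains

        freqs = [250, 500, 1000, 2000, 4000]
        hearing_levels = [20, 25, 35, 45, 55]       # dB HL, mild-to-moderate loss (toy data)
        print(one_third_gain(hearing_levels, freqs))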

  1. The Usher lifestyle survey: maintaining independence: a multi-centre study.

    PubMed

    Damen, Godelieve W J A; Krabbe, Paul F M; Kilsby, M; Mylanus, Emmanuel A M

    2005-12-01

    Patients with Usher syndrome face a special set of challenges in order to maintain their independence when their sight and hearing worsen. Three different types of Usher (I, II and III) are distinguished by differences in onset, progression and severity of hearing loss, and by the presence or absence of balance problems. In this study 93 Usher patients from seven European countries filled out a questionnaire on maintaining independence (60 patients type I, 25 patients type II, four patients type III and four patients type unknown). Results of Usher type I and II patients are presented. Following the Nordic definition of maintaining independence in deaf-blindness, three domains are investigated: access to information, communication and mobility. Research variables in this study are: age, type of Usher, considered hearing loss, and the number of retinitis pigmentosa-related sight problems. Usher type I patients tend to need more help than Usher type II patients, and the amount of help that they need grows when patients get older or when considered hearing loss worsens. No patterns in results were seen for the number of retinitis pigmentosa-related sight problems.

  2. Upward spread of informational masking in normal-hearing and hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Alexander, Joshua M.; Lutfi, Robert A.

    2003-04-01

    Thresholds for pure-tone signals of 0.8, 2.0, and 5.0 kHz were measured in the presence of a simultaneous multitone masker in 15 normal-hearing and 8 hearing-impaired listeners. The masker consisted of fixed-frequency tones ranging from 522 to 8346 Hz at 1/3-octave intervals, excluding the 2/3-octave interval on either side of the signal. Masker uncertainty was manipulated by independently and randomly playing individual masker tones with probability p=0.5 or p=1.0 on each trial. Informational masking (IM) was estimated by the threshold difference (p=0.5 minus p=1.0). Decision weights were estimated from correlations of the listener's response with the occurrence of the signal and individual masker components on each trial. IM was greater for normal-hearing listeners than for hearing-impaired listeners, and most listeners had at least 10 dB of IM for one of the signal frequencies. For both groups, IM increased as the number of masker components below the signal frequency increased. Decision weights were also similar for both groups: masker frequencies below the signal were weighted more than those above. Implications are that normal-hearing and hearing-impaired individuals do not weight information differently in these masking conditions and that factors associated with listening may be partially responsible for the greater effectiveness of low-frequency maskers. [Work supported by NIDCD.]
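    Two quantities in this abstract lend themselves to a short numerical sketch: informational masking as the threshold difference between the p=0.5 and p=1.0 masker conditions, and decision weights as correlations between the listener's trial-by-trial responses and the occurrence of the signal and of each masker component. The simulated trial data and the plain Pearson correlation below are generic stand-ins for the authors' analysis.

        import numpy as np

        # Informational masking: threshold (dB) with random maskers minus fixed maskers
        threshold_p05, threshold_p10 = 52.0, 38.0
        informational_masking = threshold_p05 - threshold_p10   # 14 dB in this toy example

        # Decision weights: correlate the response (1 = "signal present") with the
        # occurrence (1/0) of the signal and of each masker component across trials
        rng = np.random.default_rng(0)
        n_trials, n_components = 200, 6
        occurrence = rng.integers(0, 2, size=(n_trials, n_components))   # column 0 = signal
        responses = (0.8 * occurrence[:, 0] + 0.4 * occurrence[:, 1]
                     + 0.2 * rng.standard_normal(n_trials)) > 0.5
        weights = [np.corrcoef(responses.astype(float), occurrence[:, k])[0, 1]
                   for k in range(n_components)]
        print(informational_masking, np.round(weights, 2))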

  3. Toward a Nonspeech Test of Auditory Cognition: Semantic Context Effects in Environmental Sound Identification in Adults of Varying Age and Hearing Abilities

    PubMed Central

    Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian

    2016-01-01

    Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791

  4. Perception of Musical Emotion in the Students with Cognitive and Acquired Hearing Loss.

    PubMed

    Mazaheryazdi, Malihe; Aghasoleimani, Mina; Karimi, Maryam; Arjmand, Pirooz

    2018-01-01

    Hearing loss can affect the perception of emotional reactions to music. The present study investigated whether students with congenital hearing loss exposed to Deaf culture perceive the same emotions from music as students with acquired hearing loss. Participants were divided into two groups: 30 students with bilateral congenital moderate to severe hearing loss selected from deaf schools located in Tehran, Iran, and 30 students with acquired hearing loss of the same degree selected from Amiralam Hospital, Tehran, Iran; both were compared with a group of 30 age- and gender-matched normal hearing subjects who served as controls in 2012. The musical stimuli consisted of three different sequences of music (sadness, happiness, and fear), each 60 s in duration. The students were asked to point to the words in a list that best matched their emotions. Emotional perception of sadness, happiness, and fear in children with congenital hearing loss was significantly poorer than in the acquired hearing loss and normal hearing groups (P<0.001). There was no significant difference in the emotional perception of sadness, happiness, and fear between the acquired hearing loss and normal hearing groups (P=0.75, P=1, and P=0.16, respectively). Neural plasticity induced by hearing assistive devices may be affected by the time when a hearing aid was first fitted and how the auditory system responds to the reintroduction of certain sounds via amplification. Therefore, children who experienced auditory input of different sound patterns in their early childhood will show more perceptual flexibility in different situations than children with congenital hearing loss raised in Deaf culture.

  5. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing

    PubMed Central

    Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology. PMID:26317976

  6. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing.

    PubMed

    Giannantonio, Sara; Polonenko, Melissa J; Papsin, Blake C; Paludetti, Gaetano; Gordon, Karen A

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology.

  7. Thin and open vessel windows for intra-vital fluorescence imaging of murine cochlear blood flow

    PubMed Central

    Shi, Xiaorui; Zhang, Fei; Urdang, Zachary; Dai, Min; Neng, Lingling; Zhang, Jinhui; Chen, Songlin; Ramamoorthy, Sripriya; Nuttall, Alfred L.

    2014-01-01

    Normal microvessel structure and function in the cochlea is essential for maintaining the ionic and metabolic homeostasis required for hearing function. Abnormal cochlear microcirculation has long been considered an etiologic factor in hearing disorders. A better understanding of cochlear blood flow (CoBF) will enable more effective amelioration of hearing disorders that result from aberrant blood flow. However, establishing the direct relationship between CoBF and other cellular events in the lateral wall and the response to physio-pathological stress remains a challenge due to the lack of feasible interrogation methods and difficulty in accessing the inner ear. Here we report on new methods for studying the CoBF in a mouse model using a thin or open vessel-window in combination with fluorescence intra-vital microscopy (IVM). An open vessel-window enables investigation of vascular cell biology and blood flow permeability, including pericyte (PC) contractility, bone marrow cell migration, and endothelial barrier leakage, in wild type and fluorescent protein-labeled transgenic mouse models with high spatial and temporal resolution. Alternatively, the thin vessel-window method minimizes disruption of the homeostatic balance in the lateral wall and enables study of CoBF under relatively intact physiological conditions. A thin vessel-window method can also be used for time-based studies of physiological and pathological processes. Although the small size of the mouse cochlea makes surgery difficult, the methods are sufficiently developed for studying the structural and functional changes in CoBF under normal and pathological conditions. PMID:24780131

  8. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    PubMed

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions, using a pre-exposure and post-exposure test design. The primary and secondary outcome measures were: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.
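    The temporary-threshold-shift criterion used in this study (a shift of more than 10 dB, at two or more frequencies, in pure-tone thresholds or DPOAEs) is easy to state directly in code. The sketch below is a generic reading of that rule; the frequency list and example thresholds are invented, and whether the two-frequency requirement applies to the audiometric shift as well as the DPOAE decrement is an interpretation.

        def tts_flag(pre_db, post_db, shift_criterion=10.0, min_frequencies=2):
            # Flag a temporary threshold shift: post-exposure thresholds poorer than
            # pre-exposure by more than `shift_criterion` dB at `min_frequencies` or
            # more test frequencies
            shifts = [post - pre for pre, post in zip(pre_db, post_db)]
            return sum(s > shift_criterion for s in shifts) >= min_frequencies

        pre  = [5, 10, 10, 15, 20, 15]    # dB HL at 0.5, 1, 2, 3, 4, 8 kHz (example)
        post = [5, 10, 25, 30, 20, 15]
        print(tts_flag(pre, post))        # True: >10 dB shift at two frequencies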

  9. Safety of the HyperSound® Audio System in Subjects with Normal Hearing

    PubMed Central

    Mattson, Sara L.; Kappus, Brian A.; Seitzman, Robin L.

    2015-01-01

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions, using a pre-exposure and post-exposure test design. The primary and secondary outcome measures were: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions. PMID:26779330

  10. The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study

    ERIC Educational Resources Information Center

    Degeest, Sofie; Keppler, Hannah; Corthals, Paul

    2017-01-01

    Purpose: The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Method: Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different…

  11. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    ERIC Educational Resources Information Center

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  12. Effects of Age and Hearing Loss on Gap Detection and the Precedence Effect: Broadband Stimuli

    ERIC Educational Resources Information Center

    Roberts, Richard A.; Lister, Jennifer J.

    2004-01-01

    Older listeners with normal-hearing sensitivity and impaired-hearing sensitivity often demonstrate poorer-than-normal performance on tasks of speech understanding in noise and reverberation. Deficits in temporal resolution and in the precedence effect may underlie this difficulty. Temporal resolution is often studied by means of a gap-detection…

  13. The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise

    PubMed Central

    Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.

    2011-01-01

    Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants. Conclusions This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones. PMID:21412155
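
    The binaural benefits analyzed above (binaural summation, binaural squelch, and spatial release from masking) are conventionally expressed as differences between speech reception thresholds (SRTs) measured in different listening configurations, and the reported slope of roughly 10% intelligibility per dB allows an SRT difference to be read as an approximate intelligibility change. The sketch below illustrates these difference scores under commonly used definitions; the condition labels and example values are assumptions, not the study's data or exact protocol.

```python
# Conventional difference scores for the binaural benefits named above.
# SRT = speech reception threshold in dB SNR (lower is better).
# Condition labels loosely follow common usage and are assumptions here.

def binaural_summation(srt_monaural_colocated, srt_binaural_colocated):
    """Benefit of adding the second ear when speech and noise are co-located."""
    return srt_monaural_colocated - srt_binaural_colocated

def binaural_squelch(srt_monaural_separated, srt_binaural_separated):
    """Benefit of adding the second ear when speech and noise are spatially separated."""
    return srt_monaural_separated - srt_binaural_separated

def spatial_release_from_masking(srt_colocated, srt_separated):
    """Benefit of spatially separating the masker from the speech."""
    return srt_colocated - srt_separated

def approx_intelligibility_change(srt_difference_db, slope_pct_per_db=10.0):
    """Convert an SRT difference to an approximate intelligibility change,
    using the ~10% per dB slope reported above; valid only near threshold."""
    return srt_difference_db * slope_pct_per_db

# Illustrative values in dB SNR, not data from the study.
srm = spatial_release_from_masking(srt_colocated=-3.0, srt_separated=-7.5)
print(srm)                                  # 4.5 dB of spatial release
print(approx_intelligibility_change(srm))   # roughly a 45-point intelligibility change
```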

  14. The impact of aging and hearing status on verbal short-term memory.

    PubMed

    Verhaegen, Clémence; Collette, Fabienne; Majerus, Steve

    2014-01-01

    The aim of this study is to assess the impact of hearing status on age-related decrease in verbal short-term memory (STM) performance. This was done by administering a battery of verbal STM tasks to elderly and young adult participants matched for hearing thresholds, as well as to young normal-hearing control participants. The matching procedure allowed us to assess the importance of hearing loss as an explanatory factor of age-related STM decline. We observed that elderly participants and hearing-matched young participants showed equal levels of performance in all verbal STM tasks, and performed overall lower than the normal-hearing young control participants. This study provides evidence for recent theoretical accounts considering reduced hearing level as an important explanatory factor of poor auditory-verbal STM performance in older adults.

  15. Recognition and production of emotions in children with cochlear implants.

    PubMed

    Mildner, Vesna; Koska, Tena

    2014-01-01

    The aim of this study was to examine auditory recognition and vocal production of emotions in three prelingually bilaterally profoundly deaf children aged 6-7 who received cochlear implants before age 2, and compare them with age-matched normally hearing children. No consistent advantage was found for the normally hearing participants. In both groups, sadness was recognized best and disgust was the most difficult. Confusion matrices among other emotions (anger, happiness, and fear) showed that children with and without hearing impairment may rely on different cues. Both groups of children showed that perception is superior to production. Normally hearing children were more successful in the production of sadness, happiness, and fear, but not anger or disgust. The data set is too small to draw any definite conclusions, but it seems that a combination of early implantation and regular auditory-oral-based therapy enables children with cochlear implants to process and produce emotional content comparable with children with normal hearing.

  16. 10 CFR 2.303 - Docket.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... COMMISSION AGENCY RULES OF PRACTICE AND PROCEDURE Rules of General Applicability: Hearing Requests, Petitions... Powers, and General Hearing Management for NRC Adjudicatory Hearings § 2.303 Docket. The Secretary shall..., as appropriate. The Secretary shall maintain all files and records of proceedings, including...

  17. 10 CFR 2.303 - Docket.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... COMMISSION AGENCY RULES OF PRACTICE AND PROCEDURE Rules of General Applicability: Hearing Requests, Petitions... Powers, and General Hearing Management for NRC Adjudicatory Hearings § 2.303 Docket. The Secretary shall..., as appropriate. The Secretary shall maintain all files and records of proceedings, including...

  18. [Examination of relationship between level of hearing and written language skills in 10-14-year-old hearing impaired children].

    PubMed

    Turğut, Nedim; Karlıdağ, Turgut; Başar, Figen; Yalçın, Şinasi; Kaygusuz, İrfan; Keleş, Erol; Birkent, Ömer Faruk

    2015-01-01

    This study aimed to examine the relationship between written language skills and factors thought to affect these skills, including mean hearing loss, duration of auditory deprivation, speech discrimination score, preschool attendance, and socioeconomic status, in hearing impaired children attending 4th to 7th grades of primary school in an inclusive environment. The study included 25 hearing impaired children (14 males, 11 females; mean age 11.4±1.4 years; range 10 to 14 years; study group) and 20 children with normal hearing of the same age group studying in the same classes (9 males, 11 females; mean age 11.5±1.3 years; range 10 to 14 years; control group). The study group was divided into two subgroups (1a and 1b) because some of the children with hearing loss used hearing aids while others used cochlear implants. Intragroup comparisons and relational screening were performed for hearing aid and cochlear implant users, and intergroup comparisons were performed to evaluate the effect of these parameters on written language skills. The written expression skill level of children with hearing loss was significantly lower than that of their normal hearing peers (p=0.001). A significant relationship was detected between written language skills and mean hearing loss (p=0.048), duration of auditory deprivation (p=0.021), speech discrimination score (p=0.014), and preschool attendance (p=0.005), whereas no significant relationship was found with socioeconomic status (p=0.636). These findings suggest that hearing loss negatively affects written language skills and that hearing impaired children develop lower-level written language skills than their normal hearing peers.

  19. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  20. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  1. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  2. The effect of noise-induced hearing loss on the intelligibility of speech in noise

    NASA Astrophysics Data System (ADS)

    Smoorenburg, G. F.; Delaat, J. A. P. M.; Plomp, R.

    1981-06-01

    Speech reception thresholds, both in quiet and in noise, and tone audiograms were measured for 14 normal ears (7 subjects) and 44 ears (22 subjects) with noise-induced hearing loss. Maximum hearing loss in the 4-6 kHz region equalled 40 to 90 dB (losses exceeded by 90% and 10% of ears, respectively). Hearing loss for speech in quiet, measured with respect to the median speech reception threshold for normal ears, ranged from 1.8 dB to 13.4 dB. For speech in noise, the corresponding values were 1.2 dB to 7.0 dB, which means that the subjects with noise-induced hearing loss needed a 1.2 to 7.0 dB higher signal-to-noise ratio than normal to understand sentences equally well. A hearing loss for speech of 1 dB corresponds to a decrease in sentence intelligibility of 15 to 20%. The relation between hearing handicap, conceived as a reduced ability to understand speech, and the tone audiogram is discussed. The higher signal-to-noise ratio needed by people with noise-induced hearing loss to understand speech in noisy environments is shown to be due partly to the decreased bandwidth of their hearing caused by the noise dip.

  3. Maintaining Medicare HMO's: Problems, Protections and Prospects. Hearing before the Select Committee on Aging. House of Representatives, One Hundredth Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Select Committee on Aging.

    This document contains witness testimonies and prepared statements from the Congressional hearing called to examine issues involved in maintaining and strengthening Medicare Health Maintenance Organizations (HMOs). Opening statements are included from Representatives Edward Roybal, Matthew Rinaldo, Mario Biaggi, Don Bonker, Robert Borski, Louise…

  4. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  5. Glimpsing Speech in the Presence of Nonsimultaneous Amplitude Modulations from a Competing Talker: Effect of Modulation Rate, Age, and Hearing Loss

    ERIC Educational Resources Information Center

    Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.

    2016-01-01

    Purpose: This study investigated how listeners process acoustic cues preserved during sentences interrupted by nonsimultaneous noise that was amplitude modulated by a competing talker. Method: Younger adults with normal hearing and older adults with normal or impaired hearing listened to sentences with consonants or vowels replaced with noise…

  6. Sad and happy emotion discrimination in music by children with cochlear implants.

    PubMed

    Hopyan, Talar; Manno, Francis A M; Papsin, Blake C; Gordon, Karen A

    2016-01-01

    Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M = 12.7, SD = 2.6 years) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below that of their normal hearing peers (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy than that of their normal hearing peers. The slower reaction times of children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than their normal hearing peers.

  7. A comparison of the effects of filtering and sensorineural hearing loss on patterns of consonant confusions.

    PubMed

    Wang, M D; Reed, C M; Bilger, R C

    1978-03-01

    It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave results comparable to those of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation is given in terms of the speech spectrum, the level of the speech, and the configuration of the individual listener's audiogram.

  8. Tinnitus in normally hearing patients: clinical aspects and repercussions.

    PubMed

    Sanchez, Tanit Ganz; Medeiros, Italo Roberto Torres de; Levy, Cristiane Passos Dias; Ramalho, Jeanne da Rosa Oiticica; Bento, Ricardo Ferreira

    2005-01-01

    Patients with tinnitus and normal hearing constitute an important group, given that findings in these patients are not influenced by hearing loss. However, this group is rarely studied, so it is unknown whether its clinical characteristics and the interference of tinnitus in daily life are the same as those of patients with tinnitus and hearing loss. To compare tinnitus characteristics and interference in daily life between patients with and without hearing loss. Historic cohort. Among 744 tinnitus patients seen at a Tinnitus Clinic, 55 with normal audiometry were retrospectively evaluated. The control group consisted of 198 patients with tinnitus and hearing loss, following the same protocol. We analyzed the patients' data as well as the tinnitus characteristics and interference in daily life. The mean age of the studied group (43.1 +/- 13.4 years) was significantly lower than that of the control group (49.9 +/- 14.5 years). In both groups, tinnitus was predominantly reported by women and was mostly bilateral, single tone, and constant, with no differences between the groups. Interference with concentration and emotional status (25.5% and 36.4%) was significantly lower in the studied group than in the control group (46% and 61.6%), but no such difference was found for interference with sleep or social life. Patients with tinnitus and normal hearing showed characteristics similar to those of patients with hearing loss. However, the age of the patients and the interference with concentration and emotional status were significantly lower in this group.

  9. Consonant-recognition patterns and self-assessment of hearing handicap.

    PubMed

    Hustedde, C G; Wiley, T L

    1991-12-01

    Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency, sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory--Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap for the two groups of hearing-impaired listeners; it was sensitive to perceived differences in hearing abilities for listeners who did and did not have a hearing loss. Experiment 2 was aimed at evaluation of consonant error patterns that accounted for observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability for normal-hearing and hearing-impaired listeners.

  10. The effects of elevated hearing thresholds on performance in a paintball simulation of individual dismounted combat.

    PubMed

    Sheffield, Benjamin; Brungart, Douglas; Tufts, Jennifer; Ness, James

    2017-01-01

    To examine the relationship between hearing acuity and operational performance in simulated dismounted combat. Individuals wearing hearing loss simulation systems competed in a paintball-based exercise where the objective was to be the last player remaining. Four hearing loss profiles were tested in each round (no hearing loss, mild, moderate and severe) and four rounds were played to make up a match. This allowed counterbalancing of simulated hearing loss across participants. Forty-three participants across two data collection sites (Fort Detrick, Maryland and the United States Military Academy, New York). All participants self-reported normal hearing except for two who reported mild hearing loss. Impaired hearing had a greater impact on the offensive capabilities of participants than it did on their "survival", likely due to the tendency for individuals with simulated impairment to adopt a more conservative behavioural strategy than those with normal hearing. These preliminary results provide valuable insights into the impact of impaired hearing on combat effectiveness, with implications for the development of improved auditory fitness-for-duty standards, the establishment of performance requirements for hearing protection technologies, and the refinement of strategies to train military personnel on how to use hearing protection in combat environments.

  11. Evaluation of a multi-channel algorithm for reducing transient sounds.

    PubMed

    Keshavarzi, Mahmoud; Baer, Thomas; Moore, Brian C J

    2018-05-15

    The objective was to evaluate and select appropriate parameters for a multi-channel transient reduction (MCTR) algorithm for detecting and attenuating transient sounds in speech. In each trial, the same sentence was played twice. A transient sound was presented in both sentences, but its level varied across the two depending on whether or not it had been processed by the MCTR and on the "strength" of the processing. Participants indicated which presentation they preferred and by how much, in terms of the balance between the annoyance produced by the transient and its audibility (they were told that the transient should still be audible). Twenty English-speaking participants were tested, 10 with normal hearing and 10 with mild-to-moderate hearing impairment. Frequency-dependent linear amplification was provided for the latter. The results for both participant groups indicated that sounds processed using the MCTR were preferred over the unprocessed sounds. For the hearing-impaired participants, the medium and strong settings of the MCTR were preferred over the weak setting. The medium and strong settings of the MCTR reduced the annoyance produced by the transients while maintaining their audibility.
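
    The abstract does not specify the published MCTR algorithm itself, so the sketch below is only a generic illustration of multi-channel transient reduction: the signal is split into frequency bands, samples where a fast envelope jumps well above a slow (smoothed) reference envelope are treated as transients, and those samples are attenuated by a gain whose depth plays the role of the "strength" setting while leaving the transient audible. All band edges, time constants, and thresholds are assumptions, not the authors' parameters.

```python
# Generic multi-channel transient reduction sketch (not the published MCTR):
# split the signal into bands, flag samples where a fast envelope rises well
# above a slow reference envelope, and attenuate them by a "strength" gain.
# Assumes a sampling rate of at least 16 kHz for the default bands.
import numpy as np
from scipy.signal import butter, sosfilt

def smooth(x, fs, tau):
    """One-pole envelope smoother with time constant tau (seconds)."""
    alpha = np.exp(-1.0 / (fs * tau))
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = alpha * acc + (1.0 - alpha) * v
        y[i] = acc
    return y

def reduce_transients(signal, fs, bands=((100, 500), (500, 2000), (2000, 8000)),
                      threshold=4.0, strength_db=12.0):
    """Attenuate detected transients by strength_db while keeping them audible."""
    signal = np.asarray(signal, dtype=float)
    out = np.zeros_like(signal)
    gain_floor = 10.0 ** (-strength_db / 20.0)   # attenuation, not full removal
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        fast = smooth(np.abs(band), fs, tau=0.001)   # ~1 ms attack
        slow = smooth(np.abs(band), fs, tau=0.050)   # ~50 ms reference
        gain = np.where(fast / (slow + 1e-9) > threshold, gain_floor, 1.0)
        out += band * gain
    return out
```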

  12. [Relationship between the Mandarin acceptable noise level and the personality traits in normal hearing adults].

    PubMed

    Wu, Dan; Chen, Jian-yong; Wang, Shuo; Zhang, Man-hua; Chen, Jing; Li, Yu-ling; Zhang, Hua

    2013-03-01

    To evaluate the relationship between the Mandarin acceptable noise level (ANL) and personality traits in normal-hearing adults. Eighty-five Mandarin speakers, aged from 21 to 27 years, participated in this study. The Mandarin ANL materials and the Eysenck Personality Questionnaire (EPQ) were used to measure the acceptable noise level and the personality traits of the normal-hearing subjects. SPSS 17.0 was used to analyze the results. The mean ANL was (7.8 ± 2.9) dB in normal hearing participants. The P (psychoticism) and N (neuroticism) scores of the EPQ were significantly correlated with ANL (r = 0.284 and 0.318, P < 0.01), whereas no significant correlations were found between ANL and the E (extraversion) and L (lie) scores (r = -0.036 and -0.167, P > 0.05). Listeners with higher ANLs were more likely to be eccentric, hostile, aggressive, and emotionally unstable, whereas no ANL differences were found between listeners differing in introversion-extraversion or lie-scale scores.
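
    The reported analysis reduces to a set of Pearson correlations between ANL and the four EPQ subscale scores. A minimal sketch of that computation is shown below, in Python rather than SPSS, with randomly generated placeholder data rather than the study's data.

```python
# Minimal sketch of the reported analysis: Pearson correlations between the
# acceptable noise level (ANL) and each EPQ subscale (P, E, N, L).
# The data below are random placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 85                                    # sample size reported above
anl = rng.normal(7.8, 2.9, n)             # dB, matching the reported mean and SD
epq = {scale: rng.normal(10.0, 3.0, n) for scale in ("P", "E", "N", "L")}

for scale, scores in epq.items():
    r, p = pearsonr(anl, scores)
    print(f"ANL vs EPQ-{scale}: r = {r:+.3f}, p = {p:.3f}")
```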

  13. Strategic Planning to Improve EHDI Programs

    ERIC Educational Resources Information Center

    White, Karl R.; Blaiser, Kristina M.

    2011-01-01

    Because newborn hearing screening has become the standard of care in the United States, every state has established an early hearing detection and intervention (EHDI) program responsible for establishing, maintaining, and improving the system of services needed to serve children with hearing loss and their families. While significant developments…

  14. Childhood Otitis Media: A Cohort Study With 30-Year Follow-Up of Hearing (The HUNT Study).

    PubMed

    Aarhus, Lisa; Tambs, Kristian; Kvestad, Ellen; Engdahl, Bo

    2015-01-01

    To study the extent to which otitis media (OM) in childhood is associated with adult hearing thresholds, and whether the effects of OM on adult hearing thresholds are moderated by age or noise exposure. Population-based cohort study of 32,786 participants who had their hearing tested by pure-tone audiometry in primary school and again at ages ranging from 20 to 56 years. Three thousand sixty-six children were diagnosed with hearing loss; the remaining sample had normal childhood hearing. Compared with participants with normal childhood hearing, those diagnosed with childhood hearing loss caused by otitis media with effusion (n = 1255), chronic suppurative otitis media (CSOM; n = 108), or hearing loss after recurrent acute otitis media (rAOM; n = 613) had significantly increased adult hearing thresholds across the whole frequency range (2 dB, 17-20 dB, and 7-10 dB, respectively). The effects were adjusted for age, sex, and noise exposure. Children diagnosed with hearing loss after rAOM had somewhat improved hearing thresholds as adults. The effects of CSOM and hearing loss after rAOM on adult hearing thresholds were larger in participants tested in middle adulthood (ages 40 to 56 years) than in those tested in young adulthood (ages 20 to 40 years). Eardrum pathology added a marginally increased risk of adult hearing loss (1-3 dB) in children with otitis media with effusion or hearing loss after rAOM. The study did not reveal significant differences between the OM groups and the group with normal childhood hearing in the effect of self-reported noise exposure on adult hearing thresholds. This cohort study indicates that CSOM and rAOM in childhood are associated with adult hearing loss, underlining the importance of optimal treatment of these conditions. It appears that ears with a hearing loss after childhood OM age at a faster rate than those without; however, this should be confirmed by studies with several follow-up tests through adulthood.

  15. The MYC Road to Hearing Restoration

    PubMed Central

    Kopecky, Benjamin; Fritzsch, Bernd

    2012-01-01

    Current treatments for hearing loss, the most common neurosensory disorder, do not restore perfect hearing. Regeneration of lost organ of Corti hair cells, through forced cell cycle re-entry of supporting cells or through manipulation of stem cells, both avenues towards a permanent cure, requires a more complete understanding of normal inner ear development, specifically the balance of proliferation and differentiation required to form and to maintain hair cells. Direct, successful alterations to the cell cycle result in cell death, whereas regulation of upstream genes is insufficient to permanently alter cell cycle dynamics. The Myc gene family is uniquely situated to synergize upstream pathways into downstream cell cycle control. There are three Mycs that are embedded within the Myc/Max/Mad network to regulate proliferation. The functions of the two ear-expressed Mycs, N-Myc and L-Myc, were unknown less than two years ago, and their therapeutic potentials remain speculative. In this review, we discuss the roles the Mycs play in the body and what led us to choose them as our candidate genes for inner ear therapies. We summarize the recently published work describing the early and late effects of N-Myc and L-Myc on hair cell formation and maintenance. Lastly, we detail the translational significance of our findings and what future work must be performed to make the ultimate hearing aid: the regeneration of the organ of Corti. PMID:24710525

  16. Vowel perception by noise masked normal-hearing young adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2005-08-01

    This study examined vowel perception by young normal-hearing (YNH) adults, in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create equal audibility with the YHI listeners; a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ʌ æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant differences between groups in terms of performance on vowel discrimination, in conditions of similar audibility by using both noise masking to elevate the hearing thresholds of the YNH and applying frequency-specific gain to the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

  17. Infant vocalizations and the early diagnosis of severe hearing impairment.

    PubMed

    Eilers, R E; Oller, D K

    1994-02-01

    To determine whether late onset of canonical babbling could be used as a criterion to determine risk of hearing impairment, we obtained vocalization samples longitudinally from 94 infants with normal hearing and 37 infants with severe to profound hearing impairment. Parents were instructed to report the onset of canonical babbling (the production of well-formed syllables such as "da," "na," "bee," "yaya"). Verification that the infants were producing canonical syllables was collected in laboratory audio recordings. Infants with normal hearing produced canonical vocalizations before 11 months of age (range, 3 to 10 months; mode, 7 months); infants who were deaf failed to produce canonical syllables until 11 months of age or older, often well into the third year of life (range, 11 to 49 months; mode, 24 months). The correlation between age at onset of the canonical stage and age at auditory amplification was 0.68, indicating that early identification and fitting of hearing aids is of significant benefit to infants learning language. The fact that there is no overlap in the distribution of the onset of canonical babbling between infants with normal hearing and infants with hearing impairment means that the failure of otherwise healthy infants to produce canonical syllables before 11 months of age should be considered a serious risk factor for hearing impairment and, when observed, should result in immediate referral for audiologic evaluation.

  18. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation.

    PubMed

    Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen

    2017-01-01

    Early hearing deprivation could affect the development of auditory, language, and vision ability. Insufficient or no stimulation of the auditory cortex during the sensitive periods of plasticity could affect the function of hearing, language, and vision development. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age and sex matched normal hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex. Increased ALFF and ReHo were observed in vision related cortex, which suggest that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deaf duration in infants with CSSHL. ALFF of right BA39 was positively correlated with deaf duration in infants with CSSHL. In conclusion, ALFF and ReHo can reflect the abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function has been affected by congenital severe sensorineural hearing loss before 4 years of age.

  19. Masking Release in Children and Adults With Hearing Loss When Using Amplification

    PubMed Central

    McCreery, Ryan; Kopun, Judy; Lewis, Dawna; Alexander, Joshua; Stelmachowicz, Patricia

    2016-01-01

    Purpose This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. Results Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. Conclusions The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed. PMID:26540194
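
    Masking release as used above is simply the difference between the speech reception threshold (SRT) in unmodulated (steady) noise and the SRT in modulated noise. The short sketch below spells out that difference score; the example values are illustrative only.

```python
# Masking release as a difference score: SRT in steady (unmodulated) noise
# minus SRT in modulated noise, both in dB SNR. Positive values mean the
# listener benefited from dips in the modulated masker.
def masking_release(srt_unmodulated_db, srt_modulated_db):
    return srt_unmodulated_db - srt_modulated_db

# Illustrative values only: a ~1 dB release, in line with the small average
# benefit reported above.
print(masking_release(srt_unmodulated_db=-2.0, srt_modulated_db=-3.0))  # 1.0 dB
```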

  20. Static and dynamic balance of children and adolescents with sensorineural hearing loss.

    PubMed

    Melo, Renato de Souza; Marinho, Sônia Elvira Dos Santos; Freire, Maryelly Evelly Araújo; Souza, Robson Arruda; Damasceno, Hélio Anderson Melo; Raposo, Maria Cristina Falcão

    2017-01-01

    To assess the static and dynamic balance performance of students with normal hearing and with sensorineural hearing loss. A cross-sectional study assessing 96 students of both sexes, 48 with normal hearing and 48 with sensorineural hearing loss, aged 7 to 18 years. To evaluate static balance, the Romberg, Romberg-Barré and Fournier tests were used; for dynamic balance, we applied the Unterberger test. Students with hearing loss showed more changes in static and dynamic balance than those with normal hearing in all tests used (p<0.001). The same difference was found when subjects were grouped by sex. For females, the p values for the Romberg, Romberg-Barré, Fournier and Unterberger tests were, respectively, p=0.004, p<0.001, p<0.001 and p=0.023; for males, the p values were p=0.009, p<0.001, p<0.001 and p=0.002, respectively. The same difference was observed when students were classified by age. For students aged 7 to 10 years, the p values for the Romberg, Romberg-Barré and Fournier tests were, respectively, p=0.007, p<0.001 and p=0.001; for those aged 11 to 14 years, the p values for the Romberg, Romberg-Barré, Fournier and Unterberger tests were p=0.002, p<0.001, p<0.001 and p=0.015, respectively; and for those aged 15 to 18 years, the p values for the Romberg-Barré, Fournier and Unterberger tests were, respectively, p=0.037, p<0.001 and p=0.037. Students with hearing loss showed more changes in static and dynamic balance than normal hearing students of the same sex and age groups.

  1. Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.

    PubMed

    Huber, Maria; Kipman, Ulrike; Pletzer, Belinda

    2014-07-01

    The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower than or comparable to that of their normal hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular, we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills, and hearing variables. Twenty-three children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal hearing peers matched by age, gender, and social background participated in this case-control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks compared to normal hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.

  2. Intelligibility of foreign-accented speech: Effects of listening condition, listener age, and listener hearing status

    NASA Astrophysics Data System (ADS)

    Ferguson, Sarah Hargus

    2005-09-01

    It is well known that, for listeners with normal hearing, speech produced by non-native speakers of the listener's first language is less intelligible than speech produced by native speakers. Intelligibility is well correlated with listeners' ratings of talker comprehensibility and accentedness, which have been shown to be related to several talker factors, including age of second language acquisition and level of similarity between the talker's native and second language phoneme inventories. Relatively few studies have focused on factors extrinsic to the talker. The current project explored the effects of listener and environmental factors on the intelligibility of foreign-accented speech. Specifically, monosyllabic English words previously recorded from two talkers, one a native speaker of American English and the other a native speaker of Spanish, were presented to three groups of listeners (young listeners with normal hearing, elderly listeners with normal hearing, and elderly listeners with hearing impairment; n=20 each) in three different listening conditions (undistorted words in quiet, undistorted words in 12-talker babble, and filtered words in quiet). Data analysis will focus on interactions between talker accent, listener age, listener hearing status, and listening condition. [Project supported by American Speech-Language-Hearing Association AARC Award.]

  3. Masking Release in Children and Adults with Hearing Loss When Using Amplification

    ERIC Educational Resources Information Center

    Brennan, Marc; McCreery, Ryan; Kopun, Judy; Lewis, Dawna; Alexander, Joshua; Stelmachowicz, Patricia

    2016-01-01

    Purpose: This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method: Sentence recognition in unmodulated noise was compared with recognition…

  4. Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

    PubMed Central

    Dietz, Mathias; Hohmann, Volker; Jürgens, Tim

    2015-01-01

    For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types. PMID:26721918
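
    The listener-type simulations described above combine two standard signal-processing steps: a channel vocoder to approximate cochlear implant processing and low-pass filtering to approximate residual low-frequency acoustic hearing. The noise vocoder below is a generic textbook-style implementation, not the authors' exact processing chain; the channel count, band edges, envelope cutoff, and low-pass cutoff are all assumptions.

```python
# Generic noise-vocoder plus low-pass simulation of "electric + acoustic"
# hearing. Not the authors' exact processing; all parameters are illustrative.
# Assumes a sampling rate of at least 16 kHz for the default band edges.
import numpy as np
from scipy.signal import butter, sosfilt

def noise_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Channel vocoder: band-split, extract envelopes, reimpose them on noise."""
    x = np.asarray(x, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    carrier = np.random.default_rng(0).standard_normal(len(x))
    env_sos = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")  # 50 Hz envelope
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.maximum(sosfilt(env_sos, np.abs(sosfilt(band_sos, x))), 0.0)
        out += sosfilt(band_sos, carrier) * env
    return out

def residual_acoustic(x, fs, cutoff=500.0):
    """Low-pass filtering to mimic residual low-frequency acoustic hearing."""
    sos = butter(6, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, np.asarray(x, dtype=float))

def electric_plus_acoustic(x, fs):
    """Crude combination of simulated electric and acoustic hearing."""
    return noise_vocoder(x, fs) + residual_acoustic(x, fs)
```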

  5. Nurses with Undiagnosed Hearing Loss: Implications for Practice.

    PubMed

    Spencer, Cara S; Pennington, Karen

    2015-01-05

    Hearing loss affects 36 million people in the United States of America, including 17% of the adult population. This suggests some nurses will have hearing losses that affect their communication skills and their ability to perform auscultation assessments, potentially compromising patient care and safety. In this article, the authors begin by reviewing the hearing process, describing various types of hearing loss, and discussing noise-induced hearing loss and noise levels in hospitals. Next, they consider the role of hearing in nursing practice, review resources for hearing-impaired nurses, identify the many costs associated with untreated hearing loss, and note nurses' responsibility for maintaining their hearing health. The authors conclude that nurses need to be aware of their risk for hearing loss and have their hearing screened every five years.

  6. 30 CFR 227.105 - What are the hearing procedures?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 227.105 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT DELEGATION TO STATES Hearing Process § 227.105 What are the hearing procedures? After MMS notifies...; (h) MMS will maintain a record of all documents related to the proposal process; (i) After the...

  7. Evaluation of Extended-Wear Hearing Aid Technology for Operational Military Use

    DTIC Science & Technology

    2016-07-01

    listeners without degrading auditory situational awareness. To this point, significant progress has been made in this evaluation process. The devices...provide long-term hearing protection for listeners with normal hearing with minimal impact on auditory situational awareness and minimal annoyance due to...Test Plan: A comprehensive test plan is complete for the measurements at AFRL, which will incorporate goals 1-2 and 4-5 above using a normal

  8. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and nonnative english speakers.

    PubMed

    Calandruccio, Lauren; Bradlow, Ann R; Dhar, Sumitrajit

    2014-04-01

    Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Three listener groups were tested: monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric, mild sloping to moderate sensorineural hearing loss. Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups. Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native accented, allowing for improved speech recognition. Varying levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the nonnative English-speaking listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Slight modifications between the target and the masker speech allowed monolingual English speakers with normal hearing to improve their recognition of native-accented English, even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences, could help improve speech recognition for those with hearing loss in the future. American Academy of Audiology.
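
    The spectral matching step described above (normalizing the long-term average speech spectra of the maskers to one another) can be sketched as estimating each masker's long-term spectrum and applying a correction filter so that one masker's spectrum matches a reference. The function below is a generic illustration of that idea, not the authors' procedure; the Welch-estimate and FIR-filter choices are assumptions.

```python
# Generic long-term average speech spectrum (LTASS) matching: filter one masker
# so its long-term spectrum approximates a reference masker's spectrum.
# Not the authors' exact procedure; the Welch and FIR settings are assumptions.
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def match_ltass(masker, reference, fs, n_taps=1025, nperseg=1024):
    """Shape `masker` toward the long-term spectrum of `reference`."""
    f, p_masker = welch(masker, fs=fs, nperseg=nperseg)
    _, p_ref = welch(reference, fs=fs, nperseg=nperseg)
    # Magnitude correction = sqrt of the power ratio, with a small floor.
    correction = np.sqrt((p_ref + 1e-12) / (p_masker + 1e-12))
    # Linear-phase FIR filter approximating that magnitude response.
    fir = firwin2(n_taps, f / (fs / 2.0), correction)
    return lfilter(fir, [1.0], masker)
```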

  9. Children with minimal sensorineural hearing loss: prevalence, educational performance, and functional status.

    PubMed

    Bess, F H; Dodd-Murphy, J; Parker, R A

    1998-10-01

    This study was designed to determine the prevalence of minimal sensorineural hearing loss (MSHL) in school-age children and to assess the relationship of MSHL to educational performance and functional status. To determine prevalence, a single-staged sampling frame of all schools in the district was created for 3rd, 6th, and 9th grades. Schools were selected with probability proportional to size in each grade group. The final study sample was 1218 children. To assess the association of MSHL with educational performance, children identified with MSHL were assigned as cases into a subsequent case-control study. Scores of the Comprehensive Test of Basic Skills (4th Edition) (CTBS/4) then were compared between children with MSHL and children with normal hearing. School teachers completed the Screening Instrument for Targeting Education Risk (SIFTER) and the Revised Behavior Problem Checklist for a subsample of children with MSHL and their normally hearing counterparts. Finally, data on grade retention for a sample of children with MSHL were obtained from school records and compared with school district norm data. To assess the relationship between MSHL and functional status, test scores of all children with MSHL and all children with normal hearing in grades 6 and 9 were compared on the COOP Adolescent Chart Method (COOP), a screening tool for functional status. MSHL was exhibited by 5.4% of the study sample. The prevalence of all types of hearing impairment was 11.3%. Third grade children with MSHL exhibited significantly lower scores than normally hearing controls on a series of subtests of the CTBS/4; however, no differences were noted at the 6th and 9th grade levels. The SIFTER results revealed that children with MSHL scored poorer on the communication subtest than normal-hearing controls. Thirty-seven percent of the children with MSHL failed at least one grade. Finally, children with MSHL exhibited significantly greater dysfunction than children with normal hearing on several subtests of the COOP including behavior, energy, stress, social support, and self-esteem. The prevalence of hearing loss in the schools almost doubles when children with MSHL are included. This large, education-based study shows clinically important associations between MSHL and school behavior and performance. Children with MSHL experienced more difficulty than normally hearing children on a series of educational and functional test measures. Although additional research is necessary, results suggest the need for audiologists, speech-language pathologists, and educators to evaluate carefully our identification and management approaches with this population. Better efforts to manage these children could result in meaningful improvement in their educational progress and psychosocial well-being.
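
    The sampling frame described above, in which schools are selected with probability proportional to size within each grade group, can be sketched as a weighted draw. The snippet below is a simple approximation of probability-proportional-to-size selection; the school names, enrollments, and number of schools drawn are made-up placeholders.

```python
# Simple approximation of probability-proportional-to-size (PPS) school
# selection: within a grade group, each school's chance of being drawn is
# proportional to its enrollment. Names and enrollments are placeholders.
import numpy as np

def pps_sample(school_sizes, n_select, seed=0):
    """Draw n_select schools, weighted by enrollment, without replacement."""
    rng = np.random.default_rng(seed)
    names = list(school_sizes)
    sizes = np.array([school_sizes[name] for name in names], dtype=float)
    chosen = rng.choice(len(names), size=n_select, replace=False, p=sizes / sizes.sum())
    return [names[i] for i in chosen]

grade3_schools = {"School A": 420, "School B": 310, "School C": 150, "School D": 610}
print(pps_sample(grade3_schools, n_select=2))
```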

  10. Comparison of Social Interaction between Cochlear-Implanted Children with Normal Intelligence Undergoing Auditory Verbal Therapy and Normal-Hearing Children: A Pilot Study.

    PubMed

    Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh

    2018-04-01

    A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly among Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare the social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants, selected by convenience sampling. The Raven test was administered to both groups to ensure a normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using Statistical Package for Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.

  11. The missing link in language development of deaf and hard of hearing children: pragmatic language development.

    PubMed

    Goberis, Dianne; Beams, Dinah; Dalpes, Molly; Abrisch, Amanda; Baca, Rosalinda; Yoshinaga-Itano, Christine

    2012-11-01

    This article will provide information about the Pragmatics Checklist, which consists of 45 items and is scored as: (1) not present, (2) present but preverbal, (3) present with one to three words, and (4) present with complex language. Information for both children who are deaf or hard of hearing and those with normal hearing is presented. Children who are deaf or hard of hearing are significantly older when demonstrating skill with complex language than their normal hearing peers. In general, even at the age of 7 years, there are several items that are not mastered by 75% of the deaf or hard of hearing children. Additionally, the article will provide some suggestions of strategies that can be considered as a means to facilitate the development of these pragmatic language skills for children who are deaf or hard of hearing. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  12. Stability of the Medial Olivocochlear Reflex as Measured by Distortion Product Otoacoustic Emissions

    ERIC Educational Resources Information Center

    Mishra, Srikanta K.; Abdala, Carolina

    2015-01-01

    Purpose: The purpose of this study was to assess the repeatability of a fine-resolution, distortion product otoacoustic emission (DPOAE)-based assay of the medial olivocochlear (MOC) reflex in normal-hearing adults. Method: Data were collected during 36 test sessions from 4 normal-hearing adults to assess short-term stability and 5 normal-hearing…

  13. Thin and open vessel windows for intra-vital fluorescence imaging of murine cochlear blood flow.

    PubMed

    Shi, Xiaorui; Zhang, Fei; Urdang, Zachary; Dai, Min; Neng, Lingling; Zhang, Jinhui; Chen, Songlin; Ramamoorthy, Sripriya; Nuttall, Alfred L

    2014-07-01

    Normal microvessel structure and function in the cochlea are essential for maintaining the ionic and metabolic homeostasis required for hearing function. Abnormal cochlear microcirculation has long been considered an etiologic factor in hearing disorders. A better understanding of cochlear blood flow (CoBF) will enable more effective amelioration of hearing disorders that result from aberrant blood flow. However, establishing the direct relationship between CoBF and other cellular events in the lateral wall and response to physio-pathological stress remains a challenge due to the lack of feasible interrogation methods and difficulty in accessing the inner ear. Here we report on new methods for studying the CoBF in a mouse model using a thin or open vessel-window in combination with fluorescence intra-vital microscopy (IVM). An open vessel-window enables investigation of vascular cell biology and blood flow permeability, including pericyte (PC) contractility, bone marrow cell migration, and endothelial barrier leakage, in wild type and fluorescent protein-labeled transgenic mouse models with high spatial and temporal resolution. Alternatively, the thin vessel-window method minimizes disruption of the homeostatic balance in the lateral wall and enables study of CoBF under relatively intact physiological conditions. A thin vessel-window method can also be used for time-based studies of physiological and pathological processes. Although the small size of the mouse cochlea makes surgery difficult, the methods are sufficiently developed for studying the structural and functional changes in CoBF under normal and pathological conditions. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Speech Intelligibility and Prosody Production in Children with Cochlear Implants

    PubMed Central

    Chin, Steven B.; Bergeson, Tonya R.; Phan, Jennifer

    2012-01-01

    Objectives The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants. Methods The Beginner's Intelligibility Test (BIT) and Prosodic Utterance Production (PUP) task were administered to 15 children who use cochlear implants and 10 children with normal hearing. Adult listeners with normal hearing judged the intelligibility of the words in the BIT sentences, identified the PUP sentences as one of four grammatical or emotional moods (i.e., declarative, interrogative, happy, or sad), and rated the PUP sentences according to how well they thought the child conveyed the designated mood. Results Percent correct scores were higher for intelligibility than for prosody and higher for children with normal hearing than for children with cochlear implants. Declarative sentences were most readily identified and received the highest ratings by adult listeners; interrogative sentences were least readily identified and received the lowest ratings. Correlations between intelligibility and all mood identification and rating scores except declarative were not significant. Discussion The findings suggest that the development of speech intelligibility progresses ahead of prosody in both children with cochlear implants and children with normal hearing; however, children with normal hearing still perform better than children with cochlear implants on measures of intelligibility and prosody even after accounting for hearing age. Problems with interrogative intonation may be related to more general restrictions on rising intonation, and the correlation results indicate that intelligibility and sentence intonation may be relatively dissociated at these ages. PMID:22717120

  15. Free Field Word recognition test in the presence of noise in normal hearing adults.

    PubMed

    Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge

    In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. The aim was to present the validation of the Word Recognition Test in Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure tone audiometry, a speech recognition test was applied in a free field condition with monosyllables and disyllables, using standardized material in three listening situations: an optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free field environment was arranged in which speech was presented to the subject being tested from two speakers located at 45°, and noise from a third speaker located at 180°. All participants had speech audiometry results in the free field between 88% and 100% in the three listening situations. The Word Recognition Test in Free Field in the Presence of Noise proved easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should get between 88% and 100% of the stimuli correct. The test can be an important tool for measuring the interference of noise with speech perception abilities. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
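
    The two noise conditions above (signal-to-noise ratios of 0 dB and -10 dB) amount to scaling the competing noise relative to the speech level before presentation. A minimal sketch of that mixing step, assuming RMS-based level definitions and hypothetical variable names:

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale `noise` so the speech-to-noise RMS ratio equals snr_db, then return
            the mixture (arrays assumed to be the same length and sample rate)."""
            rms = lambda x: np.sqrt(np.mean(x ** 2))
            noise_scaled = noise * rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
            return speech + noise_scaled

        # Hypothetical usage for the two test conditions described above:
        # mix_0   = mix_at_snr(word, babble, 0)    # signal-to-noise ratio of 0 dB
        # mix_m10 = mix_at_snr(word, babble, -10)  # signal-to-noise ratio of -10 dB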

  16. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    PubMed

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  17. [Electronic noise compensation for improving speech discrimination in airplane pilots].

    PubMed

    Matschke, R G; Pösselt, C; Veit, I; Andresen, U

    1989-02-01

    Noise exposure measurements were performed in pilots of the Federal Navy during realistic flight situations. The ambient noise levels during regular flight service remained above 90 dB nearly all of the time. To avoid occupational hearing loss, the "Noise Injury Prevention Code" issued by the insurers would demand wearing personal ear protection, e.g. ear plugs. But such equipment in the aircraft cockpit would have precisely the opposite effect, because one of the reasons for possible damage to hearing is radio communication. To be able to understand radio traffic in spite of the noisy environment, headphone volume must be raised above the noise of the engines. The use of ear plugs can be of only limited value. Whereas pilots with normal hearing show only little impairment of speech intelligibility, those with noise-induced hearing loss show substantial impairment that varies in proportion to their hearing loss. Communication abilities may be drastically reduced, which may compromise the reliability of radio traffic. Cockpit noise has its maximum intensity around 125 Hz, and flight helmets and ear defenders are not very effective in low frequency ranges. Sennheiser electronic KG developed an active noise compensation circuit, which makes use of the "anti noise" principle. Here the outside noises picked up by two microphones integrated into the headset are processed electronically in such a way that they largely neutralise the original noise. It had to be made sure that the radio traffic signal was not also compensated and that the signal to noise ratio was clearly increased. (ABSTRACT TRUNCATED AT 250 WORDS)
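
    The "anti noise" principle mentioned above can be illustrated with a generic adaptive noise canceller: a reference microphone picks up the ambient noise, an adaptive (LMS) filter estimates the noise component reaching the ear, and that estimate is subtracted, i.e. added in anti-phase. This is a textbook sketch, not the circuit developed by Sennheiser.

        import numpy as np

        def lms_anti_noise(reference, primary, n_taps=32, mu=0.01):
            """Generic LMS canceller: `reference` is the outside-noise microphone,
            `primary` is the signal at the ear (noise plus radio traffic). The noise
            estimate is subtracted sample by sample; the radio signal, being
            uncorrelated with the reference, is left largely intact. Float arrays."""
            w = np.zeros(n_taps)
            out = np.zeros(len(primary))
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]  # most recent reference samples
                y = w @ x                          # estimated noise at the ear
                e = primary[n] - y                 # residual after "anti noise"
                w += 2 * mu * e * x                # LMS weight update
                out[n] = e
            return out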

  18. Influences of Working Memory and Audibility on Word Learning in Children with Hearing Loss

    ERIC Educational Resources Information Center

    Stiles, Derek Jason

    2010-01-01

    As a group, children with hearing loss demonstrate delays in language development relative to their peers with normal hearing. Early intervention has a profound impact on language outcomes in children with hearing loss. Data examining the relationship between degree of hearing loss and language outcomes are variable. Two approaches are used in the…

  19. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis

    2017-02-01

    It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech and language skills in children using hearing aids and/or cochlear implants compared to their peers with normal hearing. Thus, the aim of this study was to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss in visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample consisted of 92 Persian 5-7 year old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. The sentence repetition scores for the V-only, A-only, and AV presentations differed significantly in all three groups; in other words, the highest to lowest scores belonged respectively to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were found to be strongly correlated with auditory-only scores in all the 5-to-7-year-old children (r = 0.943, n = 92, P = 0.000). According to the study's findings, audiovisual integration occurs during sentence repetition in 5-to-7-year-old Persian children using hearing aids or cochlear implants, similar to their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition be used as a clinical criterion for auditory development in Persian-language children with hearing loss. Copyright © 2016. Published by Elsevier B.V.

  20. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users.

    PubMed

    Jaekel, Brittany N; Newman, Rochelle S; Goupell, Matthew J

    2017-05-24

    Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.
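
    The category boundaries and slopes referred to above are typically estimated by fitting a psychometric function to the identification responses. The sketch below fits a logistic function to proportions of one response category across VOT steps; the data values and the choice of a logistic fit are illustrative assumptions, not the authors' analysis.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(vot, boundary, slope):
            """Probability of a 'long-VOT' response as a function of VOT (ms)."""
            return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

        # Hypothetical identification data: tested VOTs and proportion of 'long' responses.
        vot_ms = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0])
        p_long = np.array([0.02, 0.10, 0.35, 0.70, 0.92, 0.98])

        (boundary, slope), _ = curve_fit(logistic, vot_ms, p_long, p0=[30.0, 0.2])
        # `boundary` is the 50% crossover in ms; `slope` indexes how reliably the
        # category shifts, analogous to the shallower slopes reported for CI users.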

  1. The Sensitivity of Adolescent Hearing Screens Significantly Improves by Adding High Frequencies.

    PubMed

    Sekhar, Deepa L; Zalewski, Thomas R; Beiler, Jessica S; Czarnecki, Beth; Barr, Ashley L; King, Tonya S; Paul, Ian M

    2016-09-01

    One in 6 US adolescents has high-frequency hearing loss, often related to hazardous noise. Yet, the American Academy of Pediatrics (AAP) hearing screen (500, 1,000, 2,000, 4,000 Hertz) primarily includes low frequencies (<3,000 Hertz). Study objectives were to determine (1) sensitivity and specificity of the AAP hearing screen for adolescent hearing loss and (2) if adding high frequencies increases sensitivity, while repeat screening of initial referrals reduces false positive results (maintaining acceptable specificity). Eleventh graders (n = 134) participated in hearing screening (2013-2014) including "gold-standard" sound-treated booth testing to calculate sensitivity and specificity. Of the 43 referrals, 27 (63%) had high-frequency hearing loss. AAP screen sensitivity and specificity were 58.1% (95% confidence interval 42.1%-73.0%) and 91.2% (95% confidence interval 83.4-96.1), respectively. Adding high frequencies (6,000, 8,000 Hertz) significantly increased sensitivity to 79.1% (64.0%-90.0%; p = .003). Specificity with repeat screening was 81.3% (71.8%-88.7%; p = .003). Adolescent hearing screen sensitivity improves with high frequencies. Repeat testing maintains acceptable specificity. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
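
    Sensitivity and specificity of a screen are simple proportions of the gold-standard outcomes, each with a confidence interval. The sketch below uses a normal-approximation (Wald) interval and hypothetical counts; the published intervals may have been computed with a different method.

        import math

        def proportion_ci(k, n, z=1.96):
            """Proportion k/n with an approximate 95% Wald confidence interval."""
            p = k / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)

        # Hypothetical counts: 25 of 43 true hearing losses flagged by the screen,
        # 83 of 91 students with normal hearing correctly passed.
        sens, s_lo, s_hi = proportion_ci(25, 43)
        spec, c_lo, c_hi = proportion_ci(83, 91)
        print(f"sensitivity {sens:.1%} ({s_lo:.1%}-{s_hi:.1%}), "
              f"specificity {spec:.1%} ({c_lo:.1%}-{c_hi:.1%})")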

  2. Positive Experiences and Life Aspirations among Adolescents with and without Hearing Impairments.

    ERIC Educational Resources Information Center

    Magen, Zipora

    1990-01-01

    Comparison of 79 normally hearing and 42 hearing-impaired adolescents found no differences regarding the intensity of their remembered positive experiences. Hearing-impaired subjects reported more positive interpersonal experiences, rarely experienced positive experiences "with self," and showed less desire for transpersonal commitment,…

  3. 38 CFR 17.149 - Sensori-neural aids.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... attendance or by reason of being permanently housebound; (6) Those who have a visual or hearing impairment... normally occurring visual or hearing impairments; and (8) Those visually or hearing impaired so severely... frequency ranges which contribute to a loss of communication ability; however, hearing aids are to be...

  4. Auditory maturation in premature infants: a potential pitfall for early cochlear implantation.

    PubMed

    Hof, Janny R; Stokroos, Robert J; Wix, Eduard; Chenault, Mickey; Gelders, Els; Brokx, Jan

    2013-08-01

    To describe spontaneous hearing improvement in the first years of life of a number of preterm neonates relative to cochlear implant candidacy. Retrospective case study. Hearing levels of 14 preterm neonates (mean gestational age at birth = 29 weeks) referred after newborn hearing screening were evaluated. Initial hearing thresholds ranged from 40 to 105 dBHL (mean = 85 dBHL). Hearing level improved to normal levels for four neonates and to moderate levels for five, whereas for five neonates, no improvement in hearing thresholds was observed and cochlear implantation was recommended. Three of the four neonates in whom the hearing improved to normal levels were born prior to 28 weeks gestational age. Hearing improvement was mainly observed prior to a gestational age of 80 weeks. Delayed maturation of an immature auditory pathway might be an important reason for referral after newborn hearing screening in premature infants. Caution is advised regarding early cochlear implantation in preterm born infants. Audiological follow-ups until at least 80 weeks gestational age are therefore recommended. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  5. 48 CFR 6101.21 - Hearing procedures [Rule 21].

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Nature and conduct of hearings. (1) Except when necessary to maintain the confidentiality of protected... subpoena pursuant to 6101.16(h) (Rule 16(h)). (h) Issues not raised by pleadings. If evidence is objected to at a hearing on the ground that it is not within the issues raised by the pleadings, it may...

  6. 29 CFR 101.20 - Formal hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... proceeding. (c) The hearing, usually open to the public, is held before a hearing officer who normally is an attorney or field examiner attached to the Regional Office but may be another qualified Agency official...

  7. The Effects of Acoustic Bandwidth on Simulated Bimodal Benefit in Children and Adults with Normal Hearing.

    PubMed

    Sheffield, Sterling W; Simha, Michelle; Jahn, Kelly N; Gifford, René H

    2016-01-01

    The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, these results suggest pediatric CI recipients may derive significant benefit from minimal acoustic hearing (<250 Hz) in the nonimplanted ear and increasing benefit with broader bandwidth. Knowledge of the effect of acoustic bandwidth on bimodal benefit in children may help direct clinical decisions regarding a second CI, continued bimodal hearing, and even optimizing acoustic amplification for the nonimplanted ear.
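
    The acoustic-bandwidth manipulation above comes down to low-pass filtering the stimuli for the non-implanted ear at each cutoff frequency. A minimal sketch, assuming a zero-phase Butterworth filter (the study's exact filter type and order are not restated here):

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def lowpass(signal, fs, cutoff_hz, order=6):
            """Zero-phase Butterworth low-pass filter; an assumed implementation."""
            sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
            return sosfiltfilt(sos, signal)

        # Hypothetical usage: one filtered version of a sentence per cutoff condition.
        # fs = 44100
        # acoustic_ear = {fc: lowpass(sentence, fs, fc) for fc in (250, 500, 750, 1000, 1500)}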

  8. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.

  9. Audio-visual integration during speech perception in prelingually deafened Japanese children revealed by the McGurk effect.

    PubMed

    Tona, Risa; Naito, Yasushi; Moroto, Saburo; Yamamoto, Rinko; Fujiwara, Keizo; Yamazaki, Hiroshi; Shinohara, Shogo; Kikuchi, Masahiro

    2015-12-01

    To investigate the McGurk effect in profoundly deafened Japanese children with cochlear implants (CI) and in normal-hearing children. This was done to identify how children with profound deafness using CI established audiovisual integration during the speech acquisition period. Twenty-four prelingually deafened children with CI and 12 age-matched normal-hearing children participated in this study. Responses to audiovisual stimuli were compared between deafened and normal-hearing controls. Additionally, responses of the children with CI younger than 6 years of age were compared with those of the children with CI at least 6 years of age at the time of the test. Responses to stimuli combining auditory labials and visual non-labials were significantly different between deafened children with CI and normal-hearing controls (p<0.05). Additionally, the McGurk effect tended to be more induced in deafened children older than 6 years of age than in their younger counterparts. The McGurk effect was more significantly induced in prelingually deafened Japanese children with CI than in normal-hearing, age-matched Japanese children. Despite having good speech-perception skills and auditory input through their CI, from early childhood, deafened children may use more visual information in speech perception than normal-hearing children. As children using CI need to communicate based on insufficient speech signals coded by CI, additional activities of higher-order brain function may be necessary to compensate for the incomplete auditory input. This study provided information on the influence of deafness on the development of audiovisual integration related to speech, which could contribute to our further understanding of the strategies used in spoken language communication by prelingually deafened children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
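
    The auditory filter bandwidths (ERBs) used above to represent cochlear frequency resolution are commonly approximated with the Glasberg and Moore formula for normal hearing, with impaired filters modeled as broader versions of these. A small sketch of that background formula, shown as general context rather than the paper's exact parameterization:

        def erb_hz(center_freq_hz):
            """Glasberg & Moore (1990) ERB of the normal auditory filter, in Hz."""
            return 24.7 * (4.37 * center_freq_hz / 1000.0 + 1.0)

        # Example: the ERB at 1 kHz is about 132.6 Hz; hearing-impaired filters are
        # often modeled as a multiple of this normal-hearing bandwidth.
        print(erb_hz(1000.0))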

  11. Travels of "Sound"

    ERIC Educational Resources Information Center

    Raghuraman, Renuka Sundaram

    2009-01-01

    Some children are born with a hearing loss. Other children, initially, hear normally, but progressively lose their hearing over time. Other reasons for hearing loss include illness, accidents, genes, trauma, or, simply, a fluke of nature. With the right tools and optimal intervention, most children adapt well and lead active lives just like anyone…

  12. The Concept of Fractional Number among Hearing-Impaired Students.

    ERIC Educational Resources Information Center

    Titus, Janet C.

    This study investigated hearing-impaired students' understanding of the mathematical concept of fractional numbers, as measured by their ability to determine the order and equivalence of fractional numbers. Twenty-one students (ages 10-16) with hearing impairments were compared with 26 students with normal hearing. The study concluded that…

  13. Development of Joint Engagement in Young Deaf and Hearing Children: Effects of Chronological Age and Language Skills

    ERIC Educational Resources Information Center

    Cejas, Ivette; Barker, David H.; Quittner, Alexandra L.; Niparko, John K.

    2014-01-01

    Purpose: To evaluate joint engagement (JE) in age-matched children with and without hearing and its relationship to oral language skills. Method: Participants were 180 children with severe-to-profound hearing loss prior to cochlear implant surgery, and 96 age-matched children with normal hearing; all parents were hearing. JE was evaluated in a…

  14. Sentence intelligibility during segmental interruption and masking by speech-modulated noise: Effects of age and hearing loss

    PubMed Central

    Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.

    2015-01-01

    This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated noise abilities, whereas the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls who received identical spectral shaping suggests that this procedure may reduce wideband temporal modulation cues due to frequency-specific amplification that affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436

  15. Development and validation of a smartphone-based digits-in-noise hearing test in South African English.

    PubMed

    Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas

    2015-07-01

    The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9) spoken by a first-language English female speaker were recorded. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal hearing in both ears, or normal hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. The results for five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was developed and validated for South Africa. The mean SRT and speech recognition functions correspond to those of previously developed telephone-based digits-in-noise tests.
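
    The adaptive procedure that tracks the speech reception threshold (SRT) can be sketched as a simple one-up/one-down staircase converging on roughly 50% correct triplet recognition. The step size, trial count, and averaging rule below are assumptions for illustration; the published test may differ in detail.

        import numpy as np

        def adaptive_srt(present_triplet, n_trials=24, start_snr_db=0.0, step_db=2.0):
            """1-up/1-down track: the SNR is lowered after a correct digit triplet and
            raised after an incorrect one. `present_triplet(snr_db)` must present a
            triplet at that SNR and return True/False for a fully correct response."""
            snr = start_snr_db
            track = []
            for _ in range(n_trials):
                correct = present_triplet(snr)
                track.append(snr)
                snr += -step_db if correct else step_db
            return float(np.mean(track[4:]))  # discard the first few approach trials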

  16. Speech intelligibility with helicopter noise: tests of three helmet-mounted communication systems.

    PubMed

    Ribera, John E; Mozo, Ben T; Murphy, Barbara A

    2004-02-01

    Military aviator helmet communications systems are designed to enhance speech intelligibility (SI) in background noise and reduce exposure to harmful levels of noise. Some aviators, over the course of their aviation career, develop noise-induced hearing loss that may affect their ability to perform required tasks. New technology can improve SI in noise for aviators with normal hearing as well as those with hearing loss. SI in noise scores were obtained from 40 rotary-wing aviators (20 with normal hearing and 20 with hearing-loss waivers). Three communication systems were evaluated: a standard SPH-4B, an SPH-4B aviator helmet modified with a communications earplug (CEP), and an SPH-4B modified with active noise reduction (ANR). Subjects' SI was better in noise with the newer technologies than with the standard issue aviator helmet. A significant number of aviators on waivers for hearing loss performed within the range of their normal-hearing counterparts when wearing the newer technology. The rank order of perceived speech clarity was 1) CEP, 2) ANR, and 3) unmodified SPH-4B. To ensure optimum SI in noise for rotary-wing aviators, consideration should be given to retrofitting existing aviator helmets with new technology and incorporating such advances in communication systems of the future. Review of standards for determining fitness to fly is needed.

  17. Longitudinal Development of Distortion Product Otoacoustic Emissions in Infants With Normal Hearing.

    PubMed

    Hunter, Lisa L; Blankenship, Chelsea M; Keefe, Douglas H; Feeney, M Patrick; Brown, David K; McCune, Annie; Fitzpatrick, Denis F; Lin, Li

    2018-01-23

    The purpose of this study was to describe normal characteristics of distortion product otoacoustic emission (DPOAE) signal and noise level in a group of newborns and infants with normal hearing followed longitudinally from birth to 15 months of age. This is a prospective, longitudinal study of 231 infants who passed newborn hearing screening and were verified to have normal hearing. Infants were enrolled from a well-baby nursery and two neonatal intensive care units (NICUs) in Cincinnati, OH. Normal hearing was confirmed with threshold auditory brainstem response and visual reinforcement audiometry. DPOAEs were measured in up to four study visits over the first year after birth. Stimulus frequencies f1 and f2 were used with f2/f1 = 1.22, and the DPOAE was recorded at frequency 2f1-f2. A longitudinal repeated-measure linear mixed model design was used to study changes in DPOAE level and noise level as related to age, middle ear transfer, race, and NICU history. Significant changes in the DPOAE and noise levels occurred from birth to 12 months of age. DPOAE levels were the highest at 1 month of age. The largest decrease in DPOAE level occurred between 1 and 5 months of age in the mid to high frequencies (2 to 8 kHz) with minimal changes occurring between 6, 9, and 12 months of age. The decrease in DPOAE level was significantly related to a decrease in wideband absorbance at the same f2 frequencies. DPOAE noise level increased only slightly with age over the first year with the highest noise levels in the 12-month-old age range. Minor, nonsystematic effects for NICU history, race, and gestational age at birth were found, thus these results were generalizable to commonly seen clinical populations. DPOAE levels were related to wideband middle ear absorbance changes in this large sample of infants confirmed to have normal hearing at auditory brainstem response and visual reinforcement audiometry testing. This normative database can be used to evaluate clinical results from birth to 1 year of age. The distributions of DPOAE level and signal to noise ratio data reported herein across frequency and age in normal-hearing infants who were healthy or had NICU histories may be helpful to detect the presence of hearing loss in infants.
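
    The stimulus geometry described above fixes the response frequency: with f2/f1 = 1.22, the cubic distortion product recorded at 2f1-f2 falls well below f2. A small sketch of that relationship:

        def dpoae_frequencies(f2_hz, ratio=1.22):
            """Return (f1, f2, 2*f1 - f2) for the f2/f1 ratio used in the study."""
            f1 = f2_hz / ratio
            return f1, f2_hz, 2.0 * f1 - f2_hz

        # Example: for f2 = 4000 Hz, f1 is about 3279 Hz and the DPOAE is recorded
        # near 2557 Hz.
        print(dpoae_frequencies(4000.0))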

  18. Individual Differences Reveal Correlates of Hidden Hearing Deficits

    PubMed Central

    Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.

    2015-01-01

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371

  19. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and non-native speakers of English

    PubMed Central

    Calandruccio, Lauren; Bradlow, Ann R.; Dhar, Sumitrajit

    2013-01-01

    Background Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared to native-accented English speech was reported in Calandruccio, Dhar and Bradlow (2010). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. Purpose The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. Research Design A mixed model design with within- (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech, and high-intelligibility, moderate-intelligibility and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Study Sample Three listener groups were tested including monolingual English speakers with normal hearing, non-native speakers of English with normal hearing, and monolingual speakers of English with hearing loss. The non-native speakers of English were from various native-language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetrical, mild sloping to moderate sensorineural hearing loss. Data Collection and Analysis Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the keywords within the sentences (100 keywords/masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and the listener groups. Results Monolingual speakers of English with normal hearing benefited when the competing speech signal was foreign-accented compared to native-accented allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the non-native English listeners with normal hearing, nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Conclusions Slight modifications between the target and the masker speech allowed monolingual speakers of English with normal hearing to improve their recognition of native-accented English even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences could help improve speech recognition for those with hearing loss in the future. PMID:25126683

  20. Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing.

    PubMed

    Gordon-Salant, Sandra; Cole, Stacey Samuels

    2016-01-01

    This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.

  1. Neurotrophins and electrical stimulation for protection and repair of spiral ganglion neurons following sensorineural hearing loss

    PubMed Central

    Shepherd, Robert K.; Coco, Anne; Epp, Stephanie B.

    2008-01-01

    Exogenous neurotrophins (NTs) have been shown to rescue spiral ganglion neurons (SGNs) from degeneration following a sensorineural hearing loss (SNHL). Furthermore, chronic electrical stimulation (ES) has been shown to retard SGN degeneration in some studies but not others. Since there is evidence of even greater SGN rescue when NT administration is combined with ES, we examined whether chronic ES can maintain SGN survival long after cessation of NT delivery. Young adult guinea pigs were profoundly deafened using ototoxic drugs; five days later they were unilaterally implanted with an electrode array and drug delivery system. Brain derived neurotrophic factor (BDNF) was continuously delivered to the scala tympani over a four week period while the animal simultaneously received ES via bipolar electrodes in the basal turn (i.e. turn 1) scala tympani. One cohort (n=5) received ES for six weeks (i.e. including a two week period after the cessation of BDNF delivery; ES6); a second cohort (n=5) received ES for 10 weeks (i.e. a six week period following cessation of BDNF delivery; ES10). The cochleae were harvested for histology and SGN density determined for each cochlear turn for comparison with normal hearing controls (n=4). The withdrawal of BDNF resulted in a rapid loss of SGNs in turns 2–4 of the deafened/BDNF-treated cochleae; this was significant as early as two weeks following removal of the NT when compared with normal controls (p<0.05). Importantly, there was not a significant reduction in SGNs in turn 1 (i.e. adjacent to the electrode array) two and six weeks after NT removal, as compared with normal controls. This result suggests that chronic ES can prevent the rapid loss of SGNs that occurs after the withdrawal of exogenous NTs. Implications for the clinical delivery of NTs are discussed. PMID:18243608

  2. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    PubMed Central

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise. PMID:25566159

  3. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility.

    PubMed

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  4. Audibility of reverse alarms under hearing protectors for normal and hearing-impaired listeners.

    PubMed

    Robinson, G S; Casali, J G

    1995-11-01

    The question of whether or not an individual suffering from a hearing loss is capable of hearing an auditory alarm or warning is an extremely important industrial safety issue. The ISO Standard that addresses auditory warnings for workplaces requires that any auditory alarm or warning be audible to all individuals in the workplace including those suffering from a hearing loss and/or wearing hearing protection devices (HPDs). Research was undertaken to determine how the ability to detect an alarm or warning signal changed for individuals with normal hearing and two levels of hearing loss as the levels of masking noise and alarm were manipulated. Pink noise was used as the masker and a heavy-equipment reverse alarm was used as the signal. The rating method paradigm of signal detection theory was used as the experimental procedure to separate the subjects' absolute sensitivities to the alarm from their individual criteria for deciding to respond in an affirmative manner. Results indicated that even at a fairly low signal-to-noise ratio (0 dB), subjects with a substantial hearing loss [a pure-tone average (PTA) hearing level of 45-50 dBHL in both ears] were capable of hearing the reverse alarm while wearing a high-attenuation earmuff in the pink noise used in the study.
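
    In the signal detection framework used above, sensitivity to the alarm is separated from the listener's response criterion. A minimal yes/no illustration of that separation (the study itself used the rating method; the counts here are hypothetical):

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Yes/no sensitivity d' and criterion c from response counts, with a
            small correction so hit/false-alarm rates never reach 0 or 1."""
            h = (hits + 0.5) / (hits + misses + 1)
            f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            z_h, z_f = norm.ppf(h), norm.ppf(f)
            return z_h - z_f, -0.5 * (z_h + z_f)

        # Hypothetical counts: 45 hits, 5 misses, 10 false alarms, 40 correct rejections.
        print(d_prime(45, 5, 10, 40))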

  5. Development of a test of suprathreshold acuity in noise in Brazilian Portuguese: a new method for hearing screening and surveillance.

    PubMed

    Vaez, Nara; Desgualdo-Pereira, Liliane; Paglialonga, Alessia

    2014-01-01

    This paper describes the development of a speech-in-noise test for hearing screening and surveillance in Brazilian Portuguese based on the evaluation of suprathreshold acuity performances. The SUN test (Speech Understanding in Noise) consists of a list of intervocalic consonants in noise presented in a multiple-choice paradigm by means of a touch screen. The test provides one out of three possible results: "a hearing check is recommended" (red light), "a hearing check would be advisable" (yellow light), and "no hearing difficulties" (green light) (Paglialonga et al., Comput. Biol. Med. 2014). This novel test was developed in a population of 30 normal hearing young adults and 101 adults with varying degrees of hearing impairment and handicap, including normal hearing. The test had 84% sensitivity and 76% specificity compared to conventional pure-tone screening and 83% sensitivity and 86% specificity to detect disabling hearing impairment. The test outcomes were in line with the degree of self-perceived hearing handicap. The results found here paralleled those reported in the literature for the SUN test and for conventional speech-in-noise measures. This study showed that the proposed test might be a viable method to identify individuals with hearing problems to be referred to further audiological assessment and intervention.

  6. Building and Maintaining Connection: Supporting Transition in a Rural State

    ERIC Educational Resources Information Center

    Flannery, Ann; Mason, Paula; Dunagan, Janna

    2016-01-01

    Deaf and hard of hearing students at Rocky Mountain High School (RMHS), a public school in Meridian, Idaho--and other deaf and hard of hearing students throughout the state, needed skills for the workplace. The demand was critical, and administrators knew change was needed. Co-authors Janna Dunagan, who teaches deaf and hard of hearing students,…

  7. 78 FR 38872 - American Jobs Creation Act Modifications to Section 6708, Failure To Maintain List of Advisees...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-28

    ... 24, 2013, no one has requested to speak. Therefore, the public hearing scheduled for July 2, 2013, is.... ACTION: Cancellation of notice of public hearing on proposed rulemaking. SUMMARY: This document cancels a public hearing on proposed regulations relating to the penalty under section 6708 of the Internal Revenue...

  8. The use of fundamental frequency for lexical segmentation in listeners with cochlear implants.

    PubMed

    Spitzer, Stephanie; Liss, Julie; Spahr, Tony; Dorman, Michael; Lansford, Kaitlin

    2009-06-01

    Fundamental frequency (F0) variation is one of a number of acoustic cues normal hearing listeners use for guiding lexical segmentation of degraded speech. This study examined whether F0 contour facilitates lexical segmentation by listeners fitted with cochlear implants (CIs). Lexical boundary error patterns elicited under unaltered and flattened F0 conditions were compared across three groups: listeners with conventional CI, listeners with CI and preserved low-frequency acoustic hearing, and normal hearing listeners subjected to CI simulations. Results indicate that all groups attended to syllabic stress cues to guide lexical segmentation, and that F0 contours facilitated performance for listeners with low-frequency hearing.

  9. The time course of learning during a vowel discrimination task by hearing-impaired and masked normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Davis, Carrie; Kewley-Port, Diane; Coughlin, Maureen

    2002-05-01

    Vowel discrimination was compared between a group of young, well-trained listeners with mild-to-moderate sensorineural hearing impairment (YHI), and a matched group of normal hearing, noise-masked listeners (YNH). Unexpectedly, discrimination of F1 and F2 in the YHI listeners was equal to or better than that observed in YNH listeners in three conditions of similar audibility [Davis et al., J. Acoust. Soc. Am. 109, 2501 (2001)]. However, in the same time interval, the YHI subjects completed an average of 55% more blocks of testing than the YNH group. New analyses were undertaken to examine the time course of learning during the vowel discrimination task, to determine whether performance was affected by the number of trials. Learning curves for a set of vowels in the F1 and F2 regions showed no significant differences between the YHI and YNH listeners. Thus, while the YHI subjects completed more trials overall, they achieved a level of discrimination similar to that of their normal-hearing peers within the same number of blocks. Implications of discrimination performance in relation to hearing status and listening strategies will be discussed. [Work supported by NIHDCD-02229.]

  10. Speak on time! Effects of a musical rhythmic training on children with hearing loss.

    PubMed

    Hidalgo, Céline; Falk, Simone; Schön, Daniele

    2017-08-01

    This study investigates temporal adaptation in speech interaction in children with normal hearing and in children with cochlear implants (CIs) and/or hearing aids (HAs). We also address the question of whether musical rhythmic training can improve these skills in children with hearing loss (HL). Children named pictures presented on the screen in alternation with a virtual partner. Alternation rate (fast or slow) and the temporal predictability (match vs mismatch of stress occurrences) were manipulated. One group of children with normal hearing (NH) and one with HL were tested. The latter group was tested twice: once after 30 min of speech therapy and once after 30 min of musical rhythmic training. Both groups of children (NH and with HL) can adjust their speech production to the rate of alternation of the virtual partner. Moreover, while children with normal hearing benefit from the temporal regularity of stress occurrences, children with HL become sensitive to this manipulation only after rhythmic training. Rhythmic training may help children with HL to structure the temporal flow of their verbal interactions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. An acoustic analysis of laughter produced by congenitally deaf and normally hearing college students.

    PubMed

    Makagon, Maja M; Funayama, E Sumie; Owren, Michael J

    2008-07-01

    Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations.

  12. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    PubMed

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.

  13. Role of DFNB1 mutations in hereditary hearing loss among assortative mating hearing impaired families from South India.

    PubMed

    Amritkumar, Pavithra; Jeffrey, Justin Margret; Chandru, Jayasankaran; Vanniya S, Paridhy; Kalaimathi, M; Ramakrishnan, Rajagopalan; Karthikeyen, N P; Srikumari Srisailapathy, C R

    2018-06-19

    DFNB1, the first locus to have been associated with deafness, has two major genes, GJB2 and GJB6, whose mutations have played a vital role in hearing impairment across many ethnicities in the world. In the present study we focused on the role of these mutations in assortative mating hearing impaired families from South India. One hundred and six assortatively mating hearing impaired (HI) families of south Indian origin, comprising two subsets of 60 deaf marrying deaf (DXD) families and 46 deaf marrying normal hearing (DXN) families, were recruited for this study. In the 60 DXD families, 335 members, comprising 118 HI mates, 63 other HI members and 154 normal hearing members, and in the 46 DXN families, 281 members, comprising 46 HI and their 43 normal hearing partners, 50 other HI members and 142 normal hearing family members, participated in the molecular study. One hundred and sixty five (165) healthy normal hearing volunteers were recruited as controls for this study. All the participating members were screened for variants in the GJB2 and GJB6 genes, and the outcome of these mutations in begetting deaf offspring was compared in the subsequent generation. The DFNB1 allele frequencies for DXD mates and their offspring were 36.98 and 38.67%, respectively, and for the DXN mates and their offspring were 22.84 and 24.38%, respectively. There was a 4.6% increase in the subsequent generation in the DXD families and a 6.75% increase in the DXN families, which demonstrates the role of assortative mating, along with consanguinity, in the increase of DFNB1 mutations in consecutive generations. Four novel variants, p.E42D (in the GJB2 gene) and p.Q57R, p.E101Q and p.R104H (in the GJB6 gene), were also identified in this study. This is the first study from the Indian subcontinent reporting novel variants in the coding region of the GJB6 gene. This is perhaps the first study in the world to test, in real time, the hypothesis proposed by Nance et al. in 2000 (that intense phenotypic assortative mating can double the frequency of the commonest form of recessive deafness [DFNB1]) in an assortative mating HI parental generation and their offspring.
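
    The reported generational increases (4.6% in DXD and 6.75% in DXN families) are consistent with a relative change between parental and offspring allele frequencies. The minimal Python sketch below simply reproduces that arithmetic from the frequencies quoted above; the function name is ours, not the authors'.

        # Relative change between parental and offspring DFNB1 allele frequencies,
        # using the percentages quoted in the abstract above.
        def relative_increase(parent_pct, offspring_pct):
            return (offspring_pct - parent_pct) / parent_pct * 100.0

        print(round(relative_increase(36.98, 38.67), 2))  # DXD families, reported as ~4.6%
        print(round(relative_increase(22.84, 24.38), 2))  # DXN families, reported as ~6.75%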

  14. Clinical Value of Vestibular Evoked Myogenic Potential in Assessing the Stage and Predicting the Hearing Results in Ménière's Disease.

    PubMed

    Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa; Chung, Won-Ho

    2013-06-01

    Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD.

  15. Hearing impairment in preterm very low birthweight babies detected at term by brainstem auditory evoked responses.

    PubMed

    Jiang, Z D; Brosi, D M; Wilkinson, A R

    2001-12-01

    Seventy preterm babies who were born with a birthweight <1500 g were studied with brainstem auditory evoked responses (BAER) at 37-42 wk of postconceptional age. The data were compared with those of normal term neonates to determine the prevalence of hearing impairment in preterm very low birthweight (VLBW) babies when they reached term. The BAER was recorded with click stimuli presented at a rate of 21/s. Wave I and V latencies increased significantly (ANOVA p < 0.01 and 0.001). I-V and III-V intervals also increased significantly (p < 0.05 and 0.001). Wave V amplitude and V/I amplitude ratio did not differ significantly from those in the normal term controls. Ten of the 70 VLBW babies had a significant elevation in BAER threshold (>30 dB normal hearing level). Eleven had an increase in I-V interval (>2.5 SD above the mean in the normal controls) and one had a decrease in V/I amplitude ratio (<0.45). These results suggest that 14% (10/70) of the VLBW babies had a peripheral hearing impairment and 17% (12/70) a central impairment. Three babies had both an increase in I-V interval and an elevation in BAER threshold, suggesting that 4% (3/70) had both peripheral and central impairments. Thus, the total prevalence of hearing impairment was 27% (19/70). About one in four preterm VLBW babies has peripheral and/or central hearing impairment at term. VLBW and its associated unfavourable perinatal factors predispose the babies to hearing impairment.
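
    The prevalence figures above follow directly from the raw counts, with the three babies showing both abnormalities counted once in the overall total. A minimal sketch of that bookkeeping, using only the counts reported in the abstract:

        n_total = 70
        peripheral = 10                      # elevated BAER threshold
        central = 12                         # increased I-V interval or reduced V/I ratio
        both = 3                             # babies appearing in both groups
        any_impairment = peripheral + central - both   # 19 babies overall

        for label, n in [("peripheral", peripheral), ("central", central),
                         ("both", both), ("any impairment", any_impairment)]:
            print(f"{label}: {n}/{n_total} = {100 * n / n_total:.0f}%")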

  16. Influence of implantable hearing aids and neuroprostheses on music perception.

    PubMed

    Rahne, Torsten; Böhme, Lars; Götze, Gerrit

    2012-01-01

    The identification and discrimination of timbre are essential features of music perception. One dominating parameter within the multidimensional timbre space is the spectral shape of complex sounds. As hearing loss interferes with the perception and enjoyment of music, we assessed individual timbre discrimination skills in individuals with severe to profound hearing loss using a cochlear implant (CI) and in normal hearing individuals using a bone-anchored hearing aid (Baha). With a recently developed behavioral test relying on synthetic sounds forming a spectral continuum, the timbre difference was changed adaptively to measure the individual just noticeable difference (JND) in a forced-choice paradigm. To explore the differences in timbre perception abilities caused by the hearing mode, the sound stimuli were varied in their fundamental frequency, thus generating different spectra which are not completely covered by a CI or Baha system. The resulting JNDs demonstrate differences in timbre perception between normal hearing individuals, Baha users, and CI users. Besides physiological reasons, technical limitations also appear to be major contributing factors.
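
    The abstract does not specify the adaptive rule used to track the JND, so the sketch below shows one common choice, a 2-down/1-up staircase driving a simulated forced-choice listener. The listener model, step size, and stopping rule are illustrative assumptions rather than the study's actual procedure.

        import random

        def simulated_listener(difference, jnd=0.3):
            # Hypothetical listener: more likely to respond correctly as the difference grows
            p_correct = 0.5 + 0.5 * min(difference / (2 * jnd), 1.0)
            return random.random() < p_correct

        def two_down_one_up(start=1.0, step=0.1, n_reversals=8):
            diff, streak, direction, reversals = start, 0, -1, []
            while len(reversals) < n_reversals:
                if simulated_listener(diff):
                    streak += 1
                    if streak == 2:                  # two correct in a row: make it harder
                        streak = 0
                        if direction == +1:          # record turnaround points
                            reversals.append(diff)
                        direction = -1
                        diff = max(diff - step, step)
                else:
                    streak = 0                       # any error: make it easier
                    if direction == -1:
                        reversals.append(diff)
                    direction = +1
                    diff += step
            # JND estimate: mean of the last reversal points
            return sum(reversals[-6:]) / len(reversals[-6:])

        random.seed(0)
        print(round(two_down_one_up(), 2))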

  17. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    PubMed

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.

  18. Informational Masking and Spatial Hearing in Listeners with and without Unilateral Hearing Loss

    ERIC Educational Resources Information Center

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Purpose: This study assessed selective listening for speech in individuals with and without unilateral hearing loss (UHL) and the potential relationship between spatial release from informational masking and localization ability in listeners with UHL. Method: Twelve adults with UHL and 12 normal-hearing controls completed a series of monaural and…

  19. Using Standardized Psychometric Tests to Identify Learning Disabilities in Students with Sensorineural Hearing Impairments.

    ERIC Educational Resources Information Center

    Sikora, Darryn M.; Plapinger, Donald S.

    1994-01-01

    The use of standardized psychoeducational diagnostic instruments to identify learning disabilities was evaluated with 19 students (ages 7 to 13) with sensorineural hearing impairments. Students with hearing impairment were found to demonstrate learning disabilities with a frequency similar to that found in students with normal hearing, suggesting…

  20. [Bilateral cochlear implants].

    PubMed

    Müller, J

    2017-07-01

    Cochlear implants (CI) are standard for the hearing rehabilitation of severe to profound deafness. Nowadays, if bilaterally indicated, bilateral implantation is usually recommended (in accordance with German guidelines). Bilateral implantation enables better speech discrimination in quiet and in noise, and restores directional and spatial hearing. Children with bilateral CI are able to undergo hearing-based auditory and speech development. Within the scope of their individual possibilities, bilaterally implanted children develop faster than children with unilateral CI and attain, e.g., a larger vocabulary within a certain time interval. Only bilateral implantation allows "binaural hearing," with all the benefits that people with normal hearing profit from, namely better speech discrimination in quiet and in noise, as well as directional and spatial hearing. Naturally, these developments take time. Binaural CI users benefit from the same effects as normal hearing persons: the head shadow effect, the squelch effect, and summation and redundancy effects. Sequential CI fitting is not necessarily disadvantageous; both simultaneously and sequentially fitted patients benefit in a similar way. For children, the earliest possible fitting and the shortest possible interval between the two surgeries seem to positively influence the outcome if bilateral CI are indicated.

  1. Four cases of acoustic neuromas with normal hearing.

    PubMed

    Valente, M; Peterein, J; Goebel, J; Neely, J G

    1995-05-01

    In 95 percent of the cases, patients with acoustic neuromas will have some magnitude of hearing loss in the affected ear. This paper reports on four patients who had acoustic neuromas and normal hearing. Results from the case history, audiometric evaluation, auditory brainstem response (ABR), electroneurography (ENOG), and vestibular evaluation are reported for each patient. For all patients, the presence of unilateral tinnitus was the most common complaint. Audiologically, elevated or absent acoustic reflex thresholds and abnormal ABR findings were the most powerful diagnostic tools.

  2. Multicenter Clinical Trial of the Nucleus® Hybrid™ S8 Cochlear Implant: Final Outcomes

    PubMed Central

    Gantz, Bruce J; Dunn, Camille; Oleson, Jacob; Hansen, Marlan; Parkinson, Aaron; Turner, Christopher

    2015-01-01

    Objective The concept of expanding electrical speech processing to those with more residual acoustic hearing, using a less invasive, shorter cochlear implant, has been ongoing since 1999. A multi-center study of the Nucleus Hybrid S8 CI took place between 2002 and 2011. This report describes the final outcomes of this clinical trial. Study Design Multi-center longitudinal single-subject design. Methods Eighty-seven subjects received a Nucleus® Hybrid™ S8 implant in their poorer ear. Speech perception in quiet (CNC words) and in noise (BKB-SIN) was collected pre- and post-operatively at 3, 6, and 12 months. Subjective questionnaire data using the APHAB were also collected. Results Some level of hearing preservation was accomplished in 98% of subjects, with 90% maintaining a functional low-frequency pure-tone average (LFPTA) at initial activation. By 12 months, 5 subjects had total hearing loss and 80% of subjects maintained functional hearing. CNC words demonstrated that 82.5% and 87.5% of subjects had significant improvements in the Hybrid and Combined conditions, respectively. The majority of subjects had improvements with the BKB-SIN. Results also indicated that as long as subjects maintained at least a severe LFPTA, there was significant improvement in speech understanding. Furthermore, all subjects reported positive improvements in hearing in three of the four subscales of the APHAB. Conclusion The concept of hybrid speech processing has significant advantages for subjects with residual low-frequency hearing. In this study, the Nucleus® Hybrid™ S8 provided improved word understanding in quiet and noise. Additionally, there appears to be stability of the residual hearing after initial activation of the device. Level of evidence 2c
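
    The low-frequency pure-tone average (LFPTA) used as the hearing-preservation criterion above is an average of audiometric thresholds at low frequencies. The one-liner below illustrates that calculation; the specific frequencies (125, 250, and 500 Hz) are a common convention and an assumption here, since the abstract does not list them.

        # Low-frequency pure-tone average from thresholds in dB HL.
        def lfpta(thresholds_db_hl):
            return sum(thresholds_db_hl) / len(thresholds_db_hl)

        # Hypothetical thresholds at 125, 250, and 500 Hz:
        print(round(lfpta([30, 40, 55]), 1))  # 41.7 dB HL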

  3. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology

    PubMed Central

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2012-01-01

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance. PMID:26557339

  4. Professionals with hearing loss: maintaining that competitive edge.

    PubMed

    Tye-Murray, Nancy; Spry, Jacqueline L; Mauzé, Elizabeth

    2009-08-01

    The goals of this investigation were to gauge how hearing loss affects the self-perceived job performance and psycho-emotional status of professionals in the workforce and to develop a profile of their aural rehabilitation needs. Forty-eight participants who had at least a high school education and who held salaried positions participated in one of seven focus groups. Participants first answered questions about a hypothetical executive who had hearing loss and considered how she might react to various communication issues. They then addressed questions about their own work-related predicaments. The sessions were audio-video recorded and later transcribed for analysis. Unlike workers who have occupational hearing loss, the professionals in this investigation seem not to experience an inordinate degree of stigmatization in their workplaces, although most believe that hearing loss has negatively affected their job performance. Some of the participants believe that they have lost their "competitive edge," and some believe that they have been denied promotions because of hearing loss. However, most report that they have overcome their hearing-related difficulties by various means, and many have developed a determination and stamina to remain active in the workforce. The majority of the participants seemed to be unfamiliar with the Americans with Disabilities Act, Public Law 101-336. The overriding theme to emerge is that professionals desire to maintain their competency to perform their jobs and will do what they have to do to "get the job done." The situations of professionals who have hearing loss can be modeled, with a central theme of maintaining job competency or a competitive edge. It is hypothesized that five factors affect professionals' abilities to continue their optimal work performance in the face of hearing loss: (a) self-concept and sense of internal locus of control, (b) use of hearing assistive technology, (c) supervisor's and co-workers' perceptions and the provision of accommodations in the workplace, (d) use of effective coping strategies, and (e) communication difficulties and problem situations. The implications that the present findings hold for aural rehabilitation intervention plans are considered, and a problem-solving approach is reviewed.

  5. Assessment of Language Comprehension of 6-Year-Old Deaf Children.

    ERIC Educational Resources Information Center

    Geffner, Donna S.; Freeman, Lisa Rothman

    1980-01-01

    Results show that comprehension of word types (nouns, verbs, etc.) and linguistic structure can be orderly, producing a hierarchy of complexity similar to that found in normally hearing children. However, performance was about three years behind that of normally hearing peers. Journal availability: Elsevier North Holland, Inc., 52 Vanderbilt…

  6. Zebrafish pax5 regulates development of the utricular macula and vestibular function.

    PubMed

    Kwak, Su-Jin; Vemaraju, Shruti; Moorman, Stephen J; Zeddies, David; Popper, Arthur N; Riley, Bruce B

    2006-11-01

    The zebrafish otic vesicle initially forms with only two sensory epithelia, the utricular and saccular maculae, which primarily mediate vestibular and auditory function, respectively. Here, we test the role of pax5, which is preferentially expressed in the utricular macula. Morpholino knockdown of pax5 disrupts vestibular function but not hearing. Neurons of the statoacoustic ganglion (SAG) develop normally. Utricular hair cells appear to form normally but a variable number subsequently undergo apoptosis and are extruded from the otic vesicle. Dendrites of the SAG persist in the utricle but become disorganized after hair cell loss. Hair cells in the saccule develop and survive normally. Otic expression of pax5 requires pax2a and fgf3, mutations in which cause vestibular defects, albeit by distinct mechanisms. Thus, pax5 works in conjunction with fgf3 and pax2a to establish and/or maintain the utricular macula and is essential for vestibular function. (c) 2006 Wiley-Liss, Inc.

  7. The carrier rate and mutation spectrum of genes associated with hearing loss in South China hearing female population of childbearing age

    PubMed Central

    2013-01-01

    Background Given that hearing loss occurs in 1 to 3 of 1,000 live births and approximately 90 to 95 percent of them are born into hearing families, it is of importance and necessity to get a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the general population. Methods 7,263 unrelated women of childbearing age with normal hearing and without family history of hearing loss were tested with an allele-specific PCR-based universal array. Further genetic testing was provided to the spouses of the screened carriers. For those couples at risk, multiple choices were provided, including prenatal diagnosis. Results Among the 7,263 normal hearing participants, 303 subjects carried pathogenic mutations included in the screening chip, giving a carrier rate of 4.17%. Of the 303 screened carriers, 282 harbored heterozygous mutated genes associated with autosomal recessive hearing loss, and 95 spouses took further genetic tests. Eight out of the nine couples who harbored deafness-causing mutations in the same gene received prenatal diagnosis. Conclusions Given that nearly 90 to 95 percent of deaf and hard-of-hearing babies are born into hearing families, a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the female population of childbearing age may be of importance in carrier screening and genetic counseling. PMID:23718755

  8. The carrier rate and mutation spectrum of genes associated with hearing loss in South China hearing female population of childbearing age.

    PubMed

    Yin, Aihua; Liu, Chang; Zhang, Yan; Wu, Jing; Mai, Mingqin; Ding, Hongke; Yang, Jiexia; Zhang, Xiaozhuang

    2013-05-29

    Given that hearing loss occurs in 1 to 3 of 1,000 live births and approximately 90 to 95 percent of them are born into hearing families, it is of importance and necessity to get a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the general population. 7,263 unrelated women of childbearing age with normal hearing and without family history of hearing loss were tested with an allele-specific PCR-based universal array. Further genetic testing was provided to the spouses of the screened carriers. For those couples at risk, multiple choices were provided, including prenatal diagnosis. Among the 7,263 normal hearing participants, 303 subjects carried pathogenic mutations included in the screening chip, giving a carrier rate of 4.17%. Of the 303 screened carriers, 282 harbored heterozygous mutated genes associated with autosomal recessive hearing loss, and 95 spouses took further genetic tests. Eight out of the nine couples who harbored deafness-causing mutations in the same gene received prenatal diagnosis. Given that nearly 90 to 95 percent of deaf and hard-of-hearing babies are born into hearing families, a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the female population of childbearing age may be of importance in carrier screening and genetic counseling.
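
    The 4.17% carrier rate quoted in both versions of this record is simply the proportion of screened women who carried a chip-covered pathogenic mutation; a one-line check using the counts above:

        carriers, screened = 303, 7263
        print(f"carrier rate = {carriers}/{screened} = {100 * carriers / screened:.2f}%")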

  9. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Binaural pitch fusion: Comparison of normal-hearing and hearing-impaired listenersa)

    PubMed Central

    Reiss, Lina A. J.; Shayman, Corey S.; Walker, Emily P.; Bennett, Keri O.; Fowler, Jennifer R.; Hartling, Curtis L.; Glickman, Bess; Lasarev, Michael R.; Oh, Yonghee

    2017-01-01

    Binaural pitch fusion is the fusion of dichotically presented tones that evoke different pitches between the ears. In normal-hearing (NH) listeners, the frequency range over which binaural pitch fusion occurs is usually <0.2 octaves. Recently, broad fusion ranges of 1–4 octaves were demonstrated in bimodal cochlear implant users. In the current study, it was hypothesized that hearing aid (HA) users would also exhibit broad fusion. Fusion ranges were measured in both NH and hearing-impaired (HI) listeners with hearing losses ranging from mild-moderate to severe-profound, and relationships of fusion range with demographic factors and with diplacusis were examined. Fusion ranges of NH and HI listeners averaged 0.17 ± 0.13 octaves and 1.7 ± 1.5 octaves, respectively. In HI listeners, fusion ranges were positively correlated with a principal component measure of the covarying factors of young age, early age of hearing loss onset, and long durations of hearing loss and HA use, but not with hearing threshold, amplification level, or diplacusis. In NH listeners, no correlations were observed with age, hearing threshold, or diplacusis. The association of broad fusion with early onset, long duration of hearing loss suggests a possible role of long-term experience with hearing loss and amplification in the development of broad fusion. PMID:28372056
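
    The fusion ranges above are expressed in octaves, i.e. the base-2 logarithm of the ratio between the highest and lowest dichotic frequencies that still fuse. The sketch below shows that conversion; the example frequencies are invented for illustration and are not from the study.

        import math

        def octaves(f_low_hz, f_high_hz):
            # Octave distance between two frequencies
            return math.log2(f_high_hz / f_low_hz)

        print(round(octaves(1000, 1150), 2))  # ~0.2 octaves, typical of NH fusion ranges
        print(round(octaves(500, 2000), 2))   # 2.0 octaves, within the broad HI range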

  11. Children Using Cochlear Implants Capitalize on Acoustical Hearing for Music Perception

    PubMed Central

    Hopyan, Talar; Peretz, Isabelle; Chan, Lisa P.; Papsin, Blake C.; Gordon, Karen A.

    2012-01-01

    Cochlear implants (CIs) electrically stimulate the auditory nerve, providing children who are deaf with access to speech and music. Because of device limitations, it was hypothesized that children using CIs develop abnormal perception of musical cues. Perception of pitch and rhythm as well as memory for music was measured by the children’s version of the Montreal Battery of Evaluation of Amusia (MBEA) in 23 unilateral CI users and 22 age-matched children with normal hearing. Children with CIs were less accurate than their normal hearing peers (p < 0.05). CI users were best able to discern rhythm changes (p < 0.01) and to remember musical pieces (p < 0.01). Contrary to expectations, abilities to hear cues in music improved as the age at implantation increased (p < 0.01), likely because the children implanted at older ages also had better low frequency hearing prior to cochlear implantation and were able to use this hearing by wearing hearing aids. Access to early acoustical hearing in the lower frequency ranges appears to establish a base for music perception, which can be accessed with later electrical CI hearing. PMID:23133430

  12. Validation of questionnaire-reported hearing with medical records: A report from the Swiss Childhood Cancer Survivor Study

    PubMed Central

    Scheinemann, Katrin; Grotzer, Michael; Kompis, Martin; Kuehni, Claudia E.

    2017-01-01

    Background Hearing loss is a potential late effect after childhood cancer. Questionnaires are often used to assess hearing in large cohorts of childhood cancer survivors and it is important to know if they can provide valid measures of hearing loss. We therefore assessed agreement and validity of questionnaire-reported hearing in childhood cancer survivors using medical records as reference. Procedure In this validation study, we studied 361 survivors of childhood cancer from the Swiss Childhood Cancer Survivor Study (SCCSS) who had been diagnosed after 1989 and had been exposed to ototoxic cancer treatment. Questionnaire-reported hearing was compared to the information in medical records. Hearing loss was defined as ≥ grade 1 according to the SIOP Boston Ototoxicity Scale. We assessed agreement and validity of questionnaire-reported hearing overall and stratified by questionnaire respondents (survivor or parent), sociodemographic characteristics, time between follow-up and questionnaire and severity of hearing loss. Results Questionnaire reports agreed with medical records in 85% of respondents (kappa 0.62), normal hearing was correctly assessed in 92% of those with normal hearing (n = 249), and hearing loss was correctly assessed in 69% of those with hearing loss (n = 112). Sensitivity of the questionnaires was 92%, 74%, and 39% for assessment of severe, moderate and mild bilateral hearing loss; and 50%, 33% and 10% for severe, moderate and mild unilateral hearing loss, respectively. Results did not differ by sociodemographic characteristics of the respondents, and survivor- and parent-reports were equally valid. Conclusions Questionnaires are a useful tool to assess hearing in large cohorts of childhood cancer survivors, but underestimate mild and unilateral hearing loss. Further research should investigate whether the addition of questions with higher sensitivity for mild degrees of hearing loss could improve the results. PMID:28333999
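
    Agreement statistics like the 85% raw agreement and kappa of 0.62 reported above come from a 2x2 cross-tabulation of questionnaire reports against medical records. The sketch below shows the calculation; the counts are approximate back-calculations from the percentages in the abstract, not the study's actual table.

        def agreement_and_kappa(both_pos, quest_only, record_only, both_neg):
            n = both_pos + quest_only + record_only + both_neg
            observed = (both_pos + both_neg) / n
            # Chance-expected agreement from the marginal proportions
            p_quest_pos = (both_pos + quest_only) / n
            p_record_pos = (both_pos + record_only) / n
            expected = p_quest_pos * p_record_pos + (1 - p_quest_pos) * (1 - p_record_pos)
            kappa = (observed - expected) / (1 - expected)
            return observed, kappa

        obs, kappa = agreement_and_kappa(both_pos=77, quest_only=20, record_only=35, both_neg=229)
        print(f"agreement = {obs:.0%}, kappa = {kappa:.2f}")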

  13. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    ERIC Educational Resources Information Center

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  14. Speech Intelligibility in Persian Hearing Impaired Children with Cochlear Implants and Hearing Aids.

    PubMed

    Rezaei, Mohammad; Emadi, Maryam; Zamani, Peyman; Farahani, Farhad; Lotfi, Gohar

    2017-04-01

    The aim of the present study was to evaluate and compare speech intelligibility in hearing impaired children using cochlear implants (CI) or hearing aids (HA) and in children with normal hearing (NH). The sample consisted of 45 Persian-speaking children aged 3 to 5 years old. They were divided into three groups of 15 children each: children with NH, children with CI, and children using hearing aids, all in Hamadan. Participants were evaluated with a test of speech intelligibility level. Results of an ANOVA on the speech intelligibility test showed that NH children performed significantly better than hearing impaired children with CI and HA. Post-hoc analysis using the Scheffe test indicated that the mean speech intelligibility score of normal children was higher than that of the HA and CI groups, but the difference between the mean speech intelligibility of children with hearing loss using cochlear implants and those using HAs was not significant. It is clear that even with remarkable advances in HA technology, many hearing impaired children continue to find speech production a challenging problem. Given that speech intelligibility is a key element in proper communication and social interaction, educational and rehabilitation programs are essential to improve the speech intelligibility of children with hearing loss.

  15. The Relationship between the Behavioral Hearing Thresholds and Maximum Bilirubin Levels at Birth in Children with a History of Neonatal Hyperbilirubinemia

    PubMed Central

    Panahi, Rasool; Jafari, Zahra; Sheibanizade, Abdoreza; Salehi, Masoud; Esteghamati, Abdoreza; Hasani, Sara

    2013-01-01

    Introduction: Neonatal hyperbilirubinemia is one of the most important factors affecting the auditory system and can cause sensorineural hearing loss. This study investigated the relationship between behavioral hearing thresholds in children with a history of jaundice and the maximum level of bilirubin concentration in the blood. Materials and Methods: This study was performed on 18 children with a mean age of 5.6 years and with a history of neonatal hyperbilirubinemia. Behavioral hearing thresholds, transient evoked emissions and brainstem evoked responses were evaluated in all children. Results: Six children (33.3%) had normal hearing thresholds and the remaining children (66.7%) had some degree of hearing loss. There was no significant relationship (r=-0.28, P=0.09) between the mean total bilirubin levels and behavioral hearing thresholds across all samples. Transient evoked emissions were seen only in children with normal hearing thresholds; however, in eight cases brainstem evoked responses were not detected. Conclusion: Increased blood levels of bilirubin in the neonatal period were potentially one of the causes of hearing loss. The lack of a direct relationship between neonatal bilirubin levels and average hearing thresholds emphasizes the necessity of monitoring hearing across the range of bilirubin levels. PMID:24303432

  16. High-frequency amplification and sound quality in listeners with normal through moderate hearing loss.

    PubMed

    Ricketts, Todd A; Dittberner, Andrew B; Johnson, Earl E

    2008-02-01

    One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the magnitude of hearing loss in individual listeners. Ten participants with normal hearing and 20 participants with mild-to-moderate hearing loss completed the study. Signals were processed using hearing aid-style compression algorithms and filtered using two cutoff frequencies, 5.5 and 9 kHz, which were selected to represent bandwidths that are achievable in modern hearing aids. Round-robin paired comparisons based on the criteria of preferred sound quality were made for 2 different monaurally presented brief sound segments, including music and a movie. Results revealed that preference for either the wider or narrower bandwidth (9- or 5.5-kHz cutoff frequency, respectively) was correlated with the slope of hearing loss from 4 to 12 kHz, with steep threshold slopes associated with preference for narrower bandwidths. Consistent preference for wider bandwidth is present in some listeners with mild-to-moderate hearing loss.

  17. Tone perception in Mandarin-speaking school age children with otitis media with effusion

    PubMed Central

    McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-01-01

    Objectives The present study explored tone perception ability in school age Mandarin-speaking children with otitis media with effusion (OME) in noisy listening environments. The study investigated the interaction effects of noise, tone type, age, and hearing status on monaural tone perception, and assessed the application of a hierarchical clustering algorithm for profiling hearing impairment in children with OME. Methods Forty-one children with normal hearing and normal middle ear status and 84 children with OME with or without hearing loss participated in this study. The children with OME were further divided into two subgroups based on their severity and pattern of hearing loss using a hierarchical clustering algorithm. Monaural tone recognition was measured using a picture-identification test format incorporating six sets of monosyllabic words conveying four lexical tones under speech spectrum noise, with the signal-to-noise ratio (SNR) conditions ranging from -9 to -21 dB. Results Linear correlation indicated tone recognition thresholds of children with OME were significantly correlated with age and pure tone hearing thresholds at every frequency tested. Children with hearing thresholds less affected by OME performed similarly to their peers with normal hearing. Tone recognition thresholds of children with auditory status more affected by OME were significantly inferior to those of children with normal hearing or with minor hearing loss. Younger children demonstrated poorer tone recognition performance than older children with OME. A mixed design repeated-measure ANCOVA showed significant main effects of listening condition, hearing status, and tone type on tone recognition. Contrast comparisons revealed that tone recognition scores were significantly better under -12 dB SNR than under -15 dB SNR conditions and tone recognition scores were significantly worse under -18 dB SNR than those obtained under -15 dB SNR conditions. Tone 1 was the easiest tone to identify and Tone 3 was the most difficult tone to identify for all participants, when considering -12, -15, and -18 dB SNR as within-subject variables. The interaction effect between hearing status and tone type indicated that children with greater levels of OME-related hearing loss had more impaired tone perception of Tone 1 and Tone 2 compared to their peers with lesser levels of OME-related hearing loss. However, tone perception of Tone 3 and Tone 4 remained similar among all three groups. Tone 2 and Tone 3 were the most perceptually difficult tones for children with or without OME-related hearing loss in all listening conditions. Conclusions The hierarchical clustering algorithm demonstrated usefulness in risk stratification for tone perception deficiency in children with OME-related hearing loss. There was marked impairment in tone perception in noise for children with greater levels of OME-related hearing loss. Monaural lexical tone perception in younger children was more vulnerable to noise and OME-related hearing loss than that in older children. PMID:28829840

  18. Tone perception in Mandarin-speaking school age children with otitis media with effusion.

    PubMed

    Cai, Ting; McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-01-01

    The present study explored tone perception ability in school age Mandarin-speaking children with otitis media with effusion (OME) in noisy listening environments. The study investigated the interaction effects of noise, tone type, age, and hearing status on monaural tone perception, and assessed the application of a hierarchical clustering algorithm for profiling hearing impairment in children with OME. Forty-one children with normal hearing and normal middle ear status and 84 children with OME with or without hearing loss participated in this study. The children with OME were further divided into two subgroups based on their severity and pattern of hearing loss using a hierarchical clustering algorithm. Monaural tone recognition was measured using a picture-identification test format incorporating six sets of monosyllabic words conveying four lexical tones under speech spectrum noise, with the signal-to-noise ratio (SNR) conditions ranging from -9 to -21 dB. Linear correlation indicated tone recognition thresholds of children with OME were significantly correlated with age and pure tone hearing thresholds at every frequency tested. Children with hearing thresholds less affected by OME performed similarly to their peers with normal hearing. Tone recognition thresholds of children with auditory status more affected by OME were significantly inferior to those of children with normal hearing or with minor hearing loss. Younger children demonstrated poorer tone recognition performance than older children with OME. A mixed design repeated-measure ANCOVA showed significant main effects of listening condition, hearing status, and tone type on tone recognition. Contrast comparisons revealed that tone recognition scores were significantly better under -12 dB SNR than under -15 dB SNR conditions and tone recognition scores were significantly worse under -18 dB SNR than those obtained under -15 dB SNR conditions. Tone 1 was the easiest tone to identify and Tone 3 was the most difficult tone to identify for all participants, when considering -12, -15, and -18 dB SNR as within-subject variables. The interaction effect between hearing status and tone type indicated that children with greater levels of OME-related hearing loss had more impaired tone perception of Tone 1 and Tone 2 compared to their peers with lesser levels of OME-related hearing loss. However, tone perception of Tone 3 and Tone 4 remained similar among all three groups. Tone 2 and Tone 3 were the most perceptually difficult tones for children with or without OME-related hearing loss in all listening conditions. The hierarchical clustering algorithm demonstrated usefulness in risk stratification for tone perception deficiency in children with OME-related hearing loss. There was marked impairment in tone perception in noise for children with greater levels of OME-related hearing loss. Monaural lexical tone perception in younger children was more vulnerable to noise and OME-related hearing loss than that in older children.
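
    The abstract above (and its PMC duplicate) mentions profiling hearing status with a hierarchical clustering algorithm but gives no implementation details. The sketch below shows one plausible way to stratify children by audiometric profile with agglomerative clustering; the simulated thresholds, Ward linkage, and two-cluster cut are assumptions for illustration, not the study's method.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(0)
        # Rows: children; columns: pure-tone thresholds (dB HL) at 0.5, 1, 2 and 4 kHz.
        # Simulated data: one near-normal group and one group with OME-related mild loss.
        thresholds = np.vstack([
            rng.normal(10, 5, size=(20, 4)),
            rng.normal(30, 6, size=(20, 4)),
        ])

        tree = linkage(thresholds, method="ward")             # agglomerative clustering
        labels = fcluster(tree, t=2, criterion="maxclust")    # cut the tree into two subgroups
        print(np.bincount(labels)[1:])                        # number of children per subgroup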

  19. Loudness growth in 1/2-octave bands (LGOB)--a procedure for the assessment of loudness.

    PubMed

    Allen, J B; Hall, J L; Jeng, P S

    1990-08-01

    In this paper, a method that has been developed for the assessment and quantification of loudness perception in normal-hearing and hearing-impaired persons is described. The method has been named LGOB, which stands for loudness growth in 1/2-octave bands. The method uses 1/2-octave bands of noise, centered at 0.25, 0.5, 1.0, 2.0, and 4.0 kHz, with subjective levels between a subject's threshold of hearing and the "too loud" level. The noise bands are presented to the subject, randomized over frequency and level, and the subject is asked to respond with a loudness rating (one of: VERY SOFT, SOFT, OK, LOUD, VERY LOUD, TOO LOUD). Subject responses (normal and hearing-impaired) are then compared to the average responses of a group of normal-hearing subjects. This procedure allows one to estimate the subject's loudness growth relative to normals, as a function of frequency and level. The results may be displayed either as isoloudness contours or as recruitment curves. In its present form, the measurements take less than 30 min. The signal presentation and analysis are done using a PC and a PC plug-in board having a digital-to-analog converter.
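
    A minimal sketch of LGOB-style bookkeeping after the randomized trials have been run: categorical ratings are mapped to numbers and averaged per band and level to trace a loudness-growth function. The category numbering and example trials are illustrative assumptions, not the published scoring.

        from collections import defaultdict
        from statistics import mean

        CATEGORIES = {"VERY SOFT": 1, "SOFT": 2, "OK": 3, "LOUD": 4, "VERY LOUD": 5, "TOO LOUD": 6}

        # (band_hz, presentation_level_dB, rating) triples from randomized trials (invented)
        trials = [
            (1000, 40, "VERY SOFT"), (1000, 40, "SOFT"), (1000, 60, "OK"),
            (1000, 60, "LOUD"), (1000, 80, "VERY LOUD"), (1000, 80, "TOO LOUD"),
        ]

        growth = defaultdict(list)
        for band, level, rating in trials:
            growth[(band, level)].append(CATEGORIES[rating])

        # Mean categorical rating per band and level approximates the loudness-growth curve
        for (band, level), ratings in sorted(growth.items()):
            print(f"{band} Hz @ {level} dB: mean rating {mean(ratings):.1f}")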

  20. Intelligence and Academic Achievement With Asymptomatic Congenital Cytomegalovirus Infection.

    PubMed

    Lopez, Adriana S; Lanzieri, Tatiana M; Claussen, Angelika H; Vinson, Sherry S; Turcich, Marie R; Iovino, Isabella R; Voigt, Robert G; Caviness, A Chantal; Miller, Jerry A; Williamson, W Daniel; Hales, Craig M; Bialek, Stephanie R; Demmler-Harrison, Gail

    2017-11-01

    This study examined intelligence, language, and academic achievement through 18 years of age among children with congenital cytomegalovirus infection identified through hospital-based newborn screening who were asymptomatic at birth, compared with uninfected infants. We used growth curve modeling to analyze trends in IQ (full-scale, verbal, and nonverbal intelligence), receptive and expressive vocabulary, and academic achievement in math and reading. Separate models were fit for each outcome, modeling the change in overall scores with increasing age for patients with normal hearing (n = 78) or with sensorineural hearing loss (SNHL) diagnosed by 2 years of age (n = 11) and controls (n = 40). Patients with SNHL had full-scale intelligence and receptive vocabulary scores that were 7.0 and 13.1 points lower, respectively, compared with controls, but no significant differences were noted in these scores among patients with normal hearing and controls. No significant differences were noted in scores for verbal and nonverbal intelligence, expressive vocabulary, and academic achievement in math and reading among patients with normal hearing or with SNHL and controls. Infants with asymptomatic congenital cytomegalovirus infection identified through newborn screening with normal hearing by age 2 years do not appear to have differences in IQ, vocabulary or academic achievement scores during childhood or adolescence compared with uninfected children. Copyright © 2017 by the American Academy of Pediatrics.
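
    Growth curve modeling of repeated scores over age is commonly implemented as a mixed-effects model with a per-child random intercept. The sketch below shows that general approach on simulated data; the variable names, model formula, and data are assumptions for illustration, not the study's analysis.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_children, visits = 30, np.array([3, 7, 12, 18])      # assessment ages in years
        child = np.repeat(np.arange(n_children), visits.size)
        age = np.tile(visits, n_children)
        group = np.repeat(rng.choice(["control", "snhl"], size=n_children), visits.size)
        score = 100 - 7 * (group == "snhl") + 0.1 * age + rng.normal(0, 5, size=child.size)

        data = pd.DataFrame({"child": child, "age": age, "group": group, "score": score})

        # Random intercept per child; fixed effects of age, group, and their interaction
        fit = smf.mixedlm("score ~ age * group", data, groups=data["child"]).fit()
        print(fit.summary())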

  1. Presbycusis, sociocusis, and nosocusis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The establishment of a baseline of normal hearing is investigated through the examination of pure tone hearing level surveys and variables such as age, sociocusis, sex, race, and otological disorders. Mathematical formulae used to predict hearing levels in industrial and nonindustrial surveys are included.

  2. [Music therapy in adults with cochlear implants : Effects on music perception and subjective sound quality].

    PubMed

    Hutter, E; Grapp, M; Argstatter, H

    2016-12-01

    People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. This study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI) and musical tests for pitch discrimination, melody recognition and timbre identification were applied. As a control 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects CI users showed deficits in the perception of pitch, melody and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination into a high and low pitch range and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy in the rehabilitation process, improvements in this delicate area could be achieved.

  3. Individual differences reveal correlates of hidden hearing deficits.

    PubMed

    Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G

    2015-02-04

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing." Copyright © 2015 the authors 0270-6474/15/352161-12$15.00/0.

  4. Ranking Hearing Aid Input-Output Functions for Understanding Low-, Conversational-, and High-Level Speech in Multitalker Babble

    ERIC Educational Resources Information Center

    Chung, King; Killion, Mead C.; Christensen, Laurel A.

    2007-01-01

    Purpose: To determine the rankings of 6 input-output functions for understanding low-level, conversational, and high-level speech in multitalker babble without manipulating volume control for listeners with normal hearing, flat sensorineural hearing loss, and mildly sloping sensorineural hearing loss. Method: Peak clipping, compression limiting,…

  5. Spoken and Written Narratives in Swedish Children and Adolescents with Hearing Impairment

    ERIC Educational Resources Information Center

    Asker-Arnason, Lena; Akerlund, Viktoria; Skoglund, Cecilia; Ek-Lagergren, Ingela; Wengelin, Asa; Sahlen, Birgitta

    2012-01-01

    Twenty 10- to 18-year-old children and adolescents with varying degrees of hearing impairment (HI) and hearing aids (HA), ranging from mild-moderate to severe, produced picture-elicited narratives in a spoken and written version. Their performance was compared to that of 63 normally hearing (NH) peers within the same age span. The participants…

  6. Vowel Identification by Listeners with Hearing Impairment in Response to Variation in Formant Frequencies

    ERIC Educational Resources Information Center

    Molis, Michelle R.; Leek, Marjorie R.

    2011-01-01

    Purpose: This study examined the influence of presentation level and mild-to-moderate hearing loss on the identification of a set of vowel tokens systematically varying in the frequency locations of their second and third formants. Method: Five listeners with normal hearing (NH listeners) and five listeners with hearing impairment (HI listeners)…

  7. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    ERIC Educational Resources Information Center

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  8. Psychosocial health of cochlear implant users compared to that of adults with and without hearing aids: Results of a nationwide cohort study.

    PubMed

    Bosdriesz, J R; Stam, M; Smits, C; Kramer, S E

    2018-06-01

    This study aimed to examine the psychosocial health status of adult cochlear implant (CI) users, compared to that of hearing aid (HA) users, hearing-impaired adults without hearing aids and normally hearing adults. Cross-sectional observational study, using both self-reported survey data and a speech-in-noise test. Data as collected within the Netherlands Longitudinal Study on Hearing (NL-SH) between September 2011 and June 2016 were used. Data from 1254 Dutch adults (aged 23-74), selected in a convenience sample design, were included for analyses. Psychosocial health measures included emotional and social loneliness, anxiety, depression, distress and somatisation. Psychosocial health, hearing status, use of hearing technology and covariates were measured by self-report; hearing ability was assessed through an online digit triplet speech-in-noise test. After adjusting for the degree of hearing impairment, HA users (N = 418) and hearing-impaired adults (N = 247) had significantly worse scores on emotional loneliness than CI users (N = 37). HA users had significantly higher anxiety scores than CI users in some analyses. Non-significant differences were found between normally hearing (N = 552) and CI users for all psychosocial outcomes. Psychosocial health of CI users is not worse than that of hearing-impaired adults with or without hearing aids. CI users' level of emotional loneliness is even lower than that of their hearing-impaired peers using hearing aids. A possible explanation is that CI patients receive more professional and family support, and guidance along their patient journey than adults who are fitted with hearing aids. © 2017 The Authors. Clinical Otolaryngology Published by John Wiley & Sons Ltd.

  9. A comparison of speech intonation production and perception abilities of Farsi speaking cochlear implanted and normal hearing children.

    PubMed

    Moein, Narges; Khoddami, Seyyedeh Maryam; Shahbodaghi, Mohammad Rahim

    2017-10-01

    The cochlear implant prosthesis facilitates spoken language development and speech comprehension in children with severe-to-profound hearing loss. However, this prosthesis is limited in encoding information about fundamental frequency and pitch, which is essential for recognition of speech prosody. The purpose of the present study was to investigate the perception and production of intonation in cochlear implanted children and to compare them with normal hearing children. The study was carried out on 25 cochlear implanted children and 50 children with normal hearing. First, statement and question sentences were elicited using 10 action pictures. Fundamental frequency and pitch changes were identified using Praat software. Then, these sentences were judged by 7 adult listeners. In the second stage, 20 sentences were played for each child, who determined whether each was a question or a statement. Performance of cochlear implanted children in perception and production of intonation was significantly lower than that of children with normal hearing. The difference between fundamental frequency and pitch changes in cochlear implanted children and children with normal hearing was significant (P < 0.05). Cochlear implanted children's performance in perception and production of intonation correlated significantly with the child's age at surgery and duration of prosthesis use (P < 0.05). The findings of the current study show that cochlear prostheses have limited capacity to facilitate the perception and production of intonation in cochlear implanted children. It should be noted that the child's age at surgery and duration of prosthesis use are important in reducing this limitation. According to these findings, speech and language pathologists should consider intonation intervention in the treatment programs of cochlear implanted children. Copyright © 2017 Elsevier B.V. All rights reserved.
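    For readers unfamiliar with the F0 analysis mentioned above, the following is a minimal, hypothetical sketch (not the authors' script) of how a sentence-final pitch movement could be extracted with Praat through the parselmouth Python interface; the file name, the 30% "tail" window, and the simple rising-versus-falling rule are illustrative assumptions only.

```python
import parselmouth  # Python interface to the Praat analysis engine


def final_pitch_movement(wav_path, tail_fraction=0.3):
    """Return the F0 change (Hz) over the final portion of an utterance."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()                  # default Praat autocorrelation settings
    f0 = pitch.selected_array['frequency']  # one F0 value per analysis frame
    f0 = f0[f0 > 0]                         # drop unvoiced frames (F0 == 0)
    tail = f0[int(len(f0) * (1 - tail_fraction)):]
    return tail[-1] - tail[0]               # positive = rising, negative = falling


delta = final_pitch_movement("utterance.wav")  # hypothetical file name
print("question-like (rising)" if delta > 0 else "statement-like (falling)")
```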

  10. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability among Listeners with Normal Hearing Thresholds

    ERIC Educational Resources Information Center

    Shinn-Cunningham, Barbara

    2017-01-01

    Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their…

  11. Speech Timing and Working Memory in Profoundly Deaf Children after Cochlear Implantation.

    ERIC Educational Resources Information Center

    Burkholder, Rose A.; Pisoni, David B.

    2003-01-01

    Compared speaking rates, digit span, and speech timing in profoundly deaf 8- and 9-year-olds with cochlear implants and normal-hearing children. Found that deaf children displayed longer sentence durations and pauses during recall and shorter digit spans than normal-hearing children. Articulation rates strongly correlated with immediate memory…

  12. Taxonomic Knowledge of Children with and without Cochlear Implants

    ERIC Educational Resources Information Center

    Lund, Emily; Dinsmoor, Jessica

    2016-01-01

    Purpose: The purpose of this study was to compare the taxonomic vocabulary knowledge and organization of children with cochlear implants to (a) children with normal hearing matched for age, and (b) children matched for vocabulary development. Method: Ten children with cochlear implants, 10 age-matched children with normal hearing, and 10…

  13. Verbal Working Memory in Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Low, Keri E.; Lowenstein, Joanna H.

    2017-01-01

    Purpose: Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants: Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method: A dual-component model of…

  14. Binaural Advantage for Younger and Older Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Dubno, Judy R.; Ahlstrom, Jayne B.; Horwitz, Amy R.

    2008-01-01

    Purpose: Three experiments measured benefit of spatial separation, benefit of binaural listening, and masking-level differences (MLDs) to assess age-related differences in binaural advantage. Method: Participants were younger and older adults with normal hearing through 4.0 kHz. Experiment 1 compared spatial benefit with and without head shadow.…

  15. Development of a Test of Suprathreshold Acuity in Noise in Brazilian Portuguese: A New Method for Hearing Screening and Surveillance

    PubMed Central

    Vaez, Nara; Desgualdo-Pereira, Liliane; Paglialonga, Alessia

    2014-01-01

    This paper describes the development of a speech-in-noise test for hearing screening and surveillance in Brazilian Portuguese based on the evaluation of suprathreshold acuity performances. The SUN test (Speech Understanding in Noise) consists of a list of intervocalic consonants in noise presented in a multiple-choice paradigm by means of a touch screen. The test provides one out of three possible results: “a hearing check is recommended” (red light), “a hearing check would be advisable” (yellow light), and “no hearing difficulties” (green light) (Paglialonga et al., Comput. Biol. Med. 2014). This novel test was developed in a population of 30 normal hearing young adults and 101 adults with varying degrees of hearing impairment and handicap, including normal hearing. The test had 84% sensitivity and 76% specificity compared to conventional pure-tone screening and 83% sensitivity and 86% specificity to detect disabling hearing impairment. The test outcomes were in line with the degree of self-perceived hearing handicap. The results found here paralleled those reported in the literature for the SUN test and for conventional speech-in-noise measures. This study showed that the proposed test might be a viable method to identify individuals with hearing problems to be referred to further audiological assessment and intervention. PMID:25247181
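    As an aside on how figures such as the 84% sensitivity and 76% specificity reported above are obtained, the sketch below computes them from a 2x2 cross-tabulation of screening outcomes against the reference test; the counts are invented and merely chosen so the arithmetic lands near the reported percentages.

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Compute screening sensitivity and specificity from a 2x2 table."""
    sensitivity = tp / (tp + fn)  # share of impaired ears flagged by the screen
    specificity = tn / (tn + fp)  # share of normal ears passed by the screen
    return sensitivity, specificity


# Invented counts, chosen only to reproduce figures close to those reported.
sens, spec = sensitivity_specificity(tp=42, fn=8, fp=12, tn=38)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 84%, 76%
```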

  16. The effect of tinnitus on some psychoacoustical abilities in individuals with normal hearing sensitivity.

    PubMed

    Jain, Chandni; Sahoo, Jitesh Prasad

    Tinnitus is the perception of a sound without an external source. It can affect auditory perception abilities in individuals with normal hearing sensitivity. The aim of the study was to determine the effect of tinnitus on psychoacoustic abilities in individuals with normal hearing sensitivity. The study was conducted on twenty subjects with tinnitus and twenty subjects without tinnitus. The tinnitus group was further divided into mild and moderate tinnitus based on the Tinnitus Handicap Inventory. Differential limen of intensity, differential limen of frequency, gap detection, and modulation detection thresholds were measured with the mlp toolbox in MATLAB, and speech perception in noise was tested with the Kannada version of the QuickSIN. Results of the study showed that the clinical group performed poorly on all the tests except differential limen of intensity. Tinnitus affects aspects of auditory perception such as temporal resolution, speech perception in noise, and frequency discrimination in individuals with normal hearing. This could be due to subtle changes in the central auditory system that are not reflected in the pure tone audiogram.

  17. Differences in the perceived music pleasantness between monolateral cochlear implanted and normal hearing children assessed by EEG.

    PubMed

    Vecchiato, G; Maglione, A G; Scorpecci, A; Malerba, P; Graziani, I; Cherubino, P; Astolfi, L; Marsella, P; Colosimo, A; Babiloni, Fabio

    2013-01-01

    The perception of music in cochlear implanted (CI) patients is an important aspect of their quality of life. The pleasantness of music perception in such CI patients can be analyzed through analysis of EEG rhythms. Studies on healthy subjects show that there is a particular frontal asymmetry of the EEG alpha rhythm that correlates with the pleasantness of perceived stimuli (approach-withdrawal theory). Here, we describe differences in EEG activity estimated in the alpha frequency band between a group of children with a monolateral CI and a normal hearing group while they watched a musical cartoon. The results of the present analysis showed that the alpha EEG asymmetry patterns of the normal hearing group indicated higher perceived pleasantness compared to the cerebral activity of the monolateral CI patients. These results support the conclusion that children with a monolateral CI may perceive music as less pleasant than normal hearing children do.

  18. Clinical Value of Vestibular Evoked Myogenic Potential in Assessing the Stage and Predicting the Hearing Results in Ménière's Disease

    PubMed Central

    Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa

    2013-01-01

    Objectives Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. Methods The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. Results In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). Conclusion VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD. PMID:23799160
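    The paper does not spell out its IAD formula, but cervical VEMP asymmetry is conventionally expressed as a normalized difference of the p1-n1 amplitudes of the two sides; the sketch below shows that conventional form with invented amplitudes (the result lands near the 23% mean quoted above only by construction).

```python
def iad_ratio(amp_unaffected, amp_affected):
    """Interaural amplitude difference ratio (%), using p1-n1 amplitudes in uV."""
    return (amp_unaffected - amp_affected) / (amp_unaffected + amp_affected) * 100


# Invented peak-to-peak amplitudes; gives roughly 23%, similar to the study's mean.
print(f"IAD ratio = {iad_ratio(amp_unaffected=120.0, amp_affected=75.0):.1f}%")
```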

  19. Syntagmatic and paradigmatic development of cochlear implanted children in comparison with normally hearing peers up to age 7.

    PubMed

    Faes, Jolien; Gillis, Joris; Gillis, Steven

    2015-09-01

    Grammatical development has been shown to be delayed in CI children. However, the literature has focussed mainly on one aspect of grammatical development, either morphology or syntax, and on standard tests instead of spontaneous speech. The aim of the present study was to compare grammatical development in the spontaneous speech of Dutch-speaking children with cochlear implants and normally hearing peers. Both syntagmatic and paradigmatic development were assessed and compared with each other. Nine children with cochlear implants were followed yearly between ages 2 and 7. There was a cross-sectional control group of 10 normally hearing peers at each age. Syntactic development was measured by means of Mean Length of Utterance (MLU), and morphological development by means of Mean Size of Paradigm (MSP); the latter measure is relatively new in child language research. The MLU and MSP of children with cochlear implants lag behind those of their normally hearing peers up to age 4 and up to age 6, respectively. By age 5, CI children catch up on MSP, and by age 7 they catch up on MLU. Children with cochlear implants thus catch up with their normally hearing peers on both the syntactic and the morphological measure. However, inflection becomes age-appropriate earlier than sentence length in CI children. Possible explanations for this difference in developmental pace are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
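    For orientation, the sketch below shows one toy way the two measures can be computed: MLU as the standard morphemes-per-utterance ratio, and MSP approximated as the mean number of distinct word forms per lemma in a sample. The utterances and tokens are invented, and the paper's exact MSP procedure may well differ from this simplification.

```python
from collections import defaultdict


def mean_length_of_utterance(utterances):
    """utterances: list of utterances, each a list of morphemes."""
    return sum(len(u) for u in utterances) / len(utterances)


def mean_size_of_paradigm(tokens):
    """tokens: list of (lemma, word_form) pairs from a speech sample."""
    forms_per_lemma = defaultdict(set)
    for lemma, form in tokens:
        forms_per_lemma[lemma].add(form)
    return sum(len(f) for f in forms_per_lemma.values()) / len(forms_per_lemma)


sample = [["ik", "loop"], ["de", "hond", "loop", "-t"]]             # invented utterances
tokens = [("lopen", "loop"), ("lopen", "loopt"), ("hond", "hond")]  # invented (lemma, form) pairs
print(mean_length_of_utterance(sample), mean_size_of_paradigm(tokens))  # 3.0 1.5
```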

  20. Hearing Impairment and Incident Dementia: Findings from the English Longitudinal Study of Ageing.

    PubMed

    Davies, Hilary R; Cadar, Dorina; Herbert, Annie; Orrell, Martin; Steptoe, Andrew

    2017-09-01

    To determine whether hearing loss is associated with incident physician-diagnosed dementia in a representative sample. Retrospective cohort study. English Longitudinal Study of Ageing. Adults aged 50 and older. Cross-sectional associations between self-reported (n = 7,865) and objective hearing measures (n = 6,902) and dementia were examined using multinomial logistic regression. The longitudinal association between self-reported hearing at Wave 2 (2004/05) and cumulative physician-diagnosed dementia up to Wave 7 (2014/15) was modelled using Cox proportional hazards regression. After adjustment for potential confounders, in cross-sectional analysis, participants who had self-reported or objective moderate and poor hearing were more likely to have a dementia diagnosis than those with normal hearing (self-reported: odds ratio [OR] = 1.6, 95% CI = 1.1-2.4 for moderate hearing, OR = 2.6, 95% CI = 1.7-3.9 for poor hearing; objective: OR = 1.6, 95% CI = 1.0-2.8 for moderate hearing, OR = 4.4, 95% CI = 1.9-9.9 for poor hearing). Longitudinally, the hazard of developing dementia was 1.4 (95% CI = 1.0-1.9) times as high in individuals who reported moderate hearing and 1.6 (95% CI = 1.1-2.0) times as high in those who reported poor hearing. Older adults with hearing loss are at greater risk of dementia than those with normal hearing. These findings are consistent with the rationale that correction of hearing loss could help delay the onset of dementia, or that hearing loss itself could serve as a risk indicator for cognitive decline. © 2017, The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.
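    The hazard ratios quoted above are exponentiated Cox-model coefficients. As a small illustration of that arithmetic only, the sketch below converts a coefficient and its standard error into a hazard ratio with a 95% confidence interval; the beta and SE values are invented and are not taken from the study.

```python
import math


def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox log-hazard coefficient and its SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)


hr, lo, hi = hazard_ratio_ci(beta=0.47, se=0.115)    # invented inputs
print(f"HR = {hr:.1f}, 95% CI = {lo:.1f}-{hi:.1f}")  # about 1.6 (1.3-2.0)
```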

  1. Army Hearing Program Talking Points Calendar Year 2015

    DTIC Science & Technology

    2016-12-14

    outside the range of normal hearing sensitivity (greater than 25 dB), CY15 data. Data: DOEHRS-HC Data Repository, Soldiers who had a DD2215 or... Data: Defense Occupational and Environmental Health Readiness System-Hearing Conservation (DOEHRS-HC) Data Repository, CY15—Army Profile... Soldiers have a hearing loss that required a fit-for-duty (Readiness) evaluation: an H-3 Hearing Profile. Data: DOEHRS-HC Data Repository

  2. Hearing speech in music.

    PubMed

    Ekström, Seth-Reino; Borg, Erik

    2011-01-01

    The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  3. Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing

    PubMed Central

    Bidelman, Gavin M.; Heinz, Michael G.

    2011-01-01

    Human listeners prefer consonant over dissonant musical intervals and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored, including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089

  4. An acoustic analysis of laughter produced by congenitally deaf and normally hearing college students

    PubMed Central

    Makagon, Maja M.; Funayama, E. Sumie; Owren, Michael J.

    2008-01-01

    Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations. PMID:18646991

  5. Children with unilateral hearing loss may have lower intelligence quotient scores: A meta-analysis.

    PubMed

    Purcell, Patricia L; Shinn, Justin R; Davis, Greg E; Sie, Kathleen C Y

    2016-03-01

    In this meta-analysis, we reviewed observational studies investigating differences in intelligence quotient (IQ) scores of children with unilateral hearing loss compared to children with normal hearing. PubMed Medline, Cumulative Index to Nursing and Allied Health Literature, Embase, PsycINFO. A query identified all English-language studies related to pediatric unilateral hearing loss published between January 1980 and December 2014. Titles, abstracts, and articles were reviewed to identify observational studies reporting IQ scores. There were 261 unique titles, with 29 articles undergoing full review. Four articles were identified, which included 173 children with unilateral hearing loss and 202 children with normal hearing. Ages ranged from 6 to 18 years. Three studies were conducted in the United States and one in Mexico. All were of high quality. All studies reported full-scale IQ results; three reported verbal IQ results; and two reported performance IQ results. Children with unilateral hearing loss scored 6.3 points lower on full-scale IQ, 95% confidence interval (CI) [-9.1, -3.5], P value < 0.001; and 3.8 points lower on performance IQ, 95% CI [-7.3, -0.2], P value 0.04. When investigating verbal IQ, we detected substantial heterogeneity among studies; exclusion of the outlying study resulted in significant difference in verbal IQ of 4 points, 95% CI [-7.5, -0.4], P value 0.028. This meta-analysis suggests children with unilateral hearing loss have lower full-scale and performance IQ scores than children with normal hearing. There also may be disparity in verbal IQ scores. Laryngoscope, 126:746-754, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
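    For readers curious how a pooled difference such as -6.3 points [95% CI -9.1, -3.5] is typically produced, the sketch below performs inverse-variance (fixed-effect) pooling of study-level mean differences; the per-study effects and standard errors are invented placeholders, not the four studies in this review.

```python
import math


def pool_fixed_effect(effects, ses, z=1.96):
    """Inverse-variance pooled mean difference with a 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled - z * pooled_se, pooled + z * pooled_se


effects = [-5.0, -8.2, -4.5, -7.1]  # invented study mean differences (IQ points)
ses = [2.8, 3.1, 2.5, 3.4]          # invented standard errors
print(pool_fixed_effect(effects, ses))
```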

  6. Relations Between Self-Reported Daily-Life Fatigue, Hearing Status, and Pupil Dilation During a Speech Perception in Noise Task.

    PubMed

    Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas

    People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires Need For Recovery and Checklist Individual Strength were given to the participants before the test session to evaluate the subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during the speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech in noise test. Less fatigue and better hearing acuity were associated with a larger pupil dilation. To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation, as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.

  7. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    PubMed Central

    Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Method Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. Results CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. Conclusion CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal. PMID:28395319
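    The category boundary and slope figures above come from psychometric-function fits to listeners' identification responses. The sketch below shows one common way to do this, fitting a logistic function to the proportion of voiceless responses across a VOT continuum; the VOT steps and response proportions are invented, and this is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit


def logistic(vot, boundary, slope):
    """Psychometric function: probability of a voiceless response at a given VOT."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))


vot_ms = np.array([10, 20, 30, 40, 50, 60], dtype=float)      # VOT continuum steps (ms)
p_voiceless = np.array([0.05, 0.10, 0.35, 0.70, 0.92, 0.98])  # invented response proportions

(boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless, p0=[35.0, 0.2])
print(f"category boundary = {boundary:.1f} ms VOT, slope = {slope:.2f}")
```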

  8. [Access by hearing-disabled individuals to health services in a southern Brazilian city].

    PubMed

    Freire, Daniela Buchrieser; Gigante, Luciana Petrucci; Béria, Jorge Umberto; Palazzo, Lílian dos Santos; Figueiredo, Andréia Cristina Leal; Raymann, Beatriz Carmen Warth

    2009-04-01

    This cross-sectional study aimed to compare access to health services and preventive measures by persons with hearing disability and those with normal hearing in Canoas, Rio Grande do Sul State, Brazil. The sample included 1,842 individuals 15 years or older (52.9% of whom were females). The most frequent income bracket was twice the minimum wage or more, or approximately U$360/month (42.7%). Individuals with hearing disability were more likely to have visited a physician in the previous two months (PR = 1.3, 95%CI: 1.10-1.51) and to have been hospitalized in the previous 12 months (PR = 2.1, 95%CI: 1.42-3.14). Regarding mental health, individuals with hearing disability showed 1.5 times greater probability of health care due to mental disorders and 4.2 times greater probability of psychiatric hospitalization as compared to those with normal hearing. Consistent with other studies, women with hearing disability performed less breast self-examination and had fewer Pap smears. The data indicate the need to invest in specific campaigns for this group of individuals with special needs.

  9. Descending projections from the inferior colliculus to medial olivocochlear efferents: Mice with normal hearing, early onset hearing loss, and congenital deafness.

    PubMed

    Suthakar, Kirupa; Ryugo, David K

    2017-01-01

    Auditory efferent neurons reside in the brain and innervate the sensory hair cells of the cochlea to modulate incoming acoustic signals. Two groups of efferents have been described in mouse and this report will focus on the medial olivocochlear (MOC) system. Electrophysiological data suggest the MOC efferents function in selective listening by differentially attenuating auditory nerve fiber activity in quiet and noisy conditions. Because speech understanding in noise is impaired in age-related hearing loss, we asked whether pathologic changes in input to MOC neurons from higher centers could be involved. The present study investigated the anatomical nature of descending projections from the inferior colliculus (IC) to MOCs in 3-month old mice with normal hearing, and in 6-month old mice with normal hearing (CBA/CaH), early onset progressive hearing loss (DBA/2), and congenital deafness (homozygous Shaker-2). Anterograde tracers were injected into the IC and retrograde tracers into the cochlea. Electron microscopic analysis of double-labelled tissue confirmed direct synaptic contact from the IC onto MOCs in all cohorts. These labelled terminals are indicative of excitatory neurotransmission because they contain round synaptic vesicles, exhibit asymmetric membrane specializations, and are co-labelled with antibodies against VGlut2, a glutamate transporter. 3D reconstructions of the terminal fields indicate that in normal hearing mice, descending projections from the IC are arranged tonotopically with low frequencies projecting laterally and progressively higher frequencies projecting more medially. Along the mediolateral axis, the projections of DBA/2 mice with acquired high frequency hearing loss were shifted medially towards expected higher frequency projecting regions. Shaker-2 mice with congenital deafness had a much broader spatial projection, revealing abnormalities in the topography of connections. These data suggest that loss in precision of IC directed MOC activation could contribute to impaired signal detection in noise. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. The Effects of Age and Hearing Loss on Tasks of Perception and Production of Intonation.

    ERIC Educational Resources Information Center

    Most, Tova; Frank, Yael

    1994-01-01

    Hearing-impaired and normal hearing children in 2 age groups (5-6 years and 9-12 years) were observed for possible differences in their perception and production of intonation. Results indicated that imitation of intonation carried on nonsense syllables was not affected by age. Hearing-impaired subjects scored much lower than controls in imitating…

  11. Talker Differences in Clear and Conversational Speech: Vowel Intelligibility for Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus

    2012-01-01

    Purpose: To establish the range of talker variability for vowel intelligibility in clear versus conversational speech for older adults with hearing loss and to determine whether talkers who produced a clear speech benefit for young listeners with normal hearing also did so for older adults with hearing loss. Method: Clear and conversational vowels…

  12. Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability

    ERIC Educational Resources Information Center

    Tye-Murray, Nancy; Hale, Sandra; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S.

    2014-01-01

    Purpose: The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method: Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4…

  13. The Use of Standardized Test Batteries in Assessing the Skill Development of Children with Mild-to-Moderate Sensorineural Hearing Loss.

    ERIC Educational Resources Information Center

    Plapinger, Donald S.; Sikora, Darryn M.

    1995-01-01

    This study of 12 children (ages 7-13) with mild to moderate bilateral sensorineural hearing loss found that psychoeducational diagnostic tests standardized on students with normal hearing may be used with confidence to assess both cognitive and academic levels of functioning in students with sensorineural hearing loss. (Author/JDD)

  14. Fitting and verification of frequency modulation systems on children with normal hearing.

    PubMed

    Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa

    2014-06-01

    Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a -5 signal-to-noise ratio and loudness ratings at a +5 signal-to-noise ratio were measured in four conditions: (1) no FM system, (2) FM receiver on the right ear, (3) FM receiver on the left ear, and (4) bilateral FM system. The results of this study suggested that the slightly modified AAA real-ear measurement procedures resulted in a valid fitting of one FM system on children with normal hearing. On average, prescriptive targets were met for 1000, 2000, 3000, and 4000 Hz within 3 dB, and maximum output of the FM system never exceeded and was significantly lower than predicted uncomfortable loudness levels for the children. There was a minimal change in the real-ear unaided response when the open-ear FM receiver was placed into the ear. Use of the FM system on one or both ears resulted in significantly better speech recognition in noise relative to a no-FM condition, and the unilateral and bilateral FM receivers resulted in a comfortably loud signal when listening in background noise. Real-ear measures are critical for obtaining an appropriate fit of an FM system on children with normal hearing. American Academy of Audiology.

  15. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    PubMed Central

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  16. Binaural fusion and listening effort in children who use bilateral cochlear implants: a psychoacoustic and pupillometric study.

    PubMed

    Steel, Morrison M; Papsin, Blake C; Gordon, Karen A

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

  17. Setting the time and place for a hearing before an administrative law judge. Final rules.

    PubMed

    2010-07-08

    We are amending our rules to state that our agency is responsible for setting the time and place for a hearing before an administrative law judge (ALJ). This change creates a 3-year pilot program that will allow us to test this new authority. Our use of this authority, consistent with due process rights of claimants, may provide us with greater flexibility in scheduling both in-person and video hearings, lead to improved efficiency in our hearing process, and reduce the number of pending hearing requests. This change is a part of our broader commitment to maintaining a hearing process that results in accurate, high-quality decisions for claimants.

  18. Processing of phonological variation in children with hearing loss: compensation for English place assimilation in connected speech.

    PubMed

    Skoruppa, Katrin; Rosen, Stuart

    2014-06-01

    In this study, the authors explored phonological processing in connected speech in children with hearing loss. Specifically, the authors investigated these children's sensitivity to English place assimilation, by which alveolar consonants like t and n can adapt to following sounds (e.g., the word ten can be realized as tem in the phrase ten pounds). Twenty-seven 4- to 8-year-old children with moderate to profound hearing impairments, using hearing aids (n = 10) or cochlear implants (n = 17), and 19 children with normal hearing participated. They were asked to choose between pictures of familiar (e.g., pen) and unfamiliar objects (e.g., astrolabe) after hearing t- and n-final words in sentences. Standard pronunciations (Can you find the pen dear?) and assimilated forms in correct (… pem please?) and incorrect contexts (… pem dear?) were presented. As expected, the children with normal hearing chose the familiar object more often for standard forms and correct assimilations than for incorrect assimilations. Thus, they are sensitive to word-final place changes and compensate for assimilation. However, the children with hearing impairment demonstrated reduced sensitivity to word-final place changes, and no compensation for assimilation. Restricted analyses revealed that children with hearing aids who showed good perceptual skills compensated for assimilation in plosives only.

  19. A comparison of vowel productions in prelingually deaf children using cochlear implants, severe hearing-impaired children using conventional hearing aids and normal-hearing children.

    PubMed

    Baudonck, Nele; Van Lierde, K; Dhooge, I; Corthals, P

    2011-01-01

    The purpose of this study was to compare vowel productions by deaf cochlear implant (CI) children, hearing-impaired hearing aid (HA) children and normal-hearing (NH) children. 73 children [mean age: 9;14 years (years;months)] participated: 40 deaf CI children, 34 moderately to profoundly hearing-impaired HA children and 42 NH children. For the 3 corner vowels [a], [i] and [u], F(1), F(2) and the intrasubject SD were measured using the Praat software. Spectral separation between these vowel formants and vowel space were calculated. The significant effects in the CI group all pertain to a higher intrasubject variability in formant values, whereas the significant effects in the HA group all pertain to lower formant values. Both hearing-impaired subgroups showed a tendency toward greater intervowel distances and vowel space. Several subtle deviations in the vowel production of deaf CI children and hearing-impaired HA children could be established, using a well-defined acoustic analysis. CI children as well as HA children in this study tended to overarticulate, which hypothetically can be explained by a lack of auditory feedback and an attempt to compensate it by proprioceptive feedback during articulatory maneuvers. Copyright © 2010 S. Karger AG, Basel.
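    As a point of reference for the vowel space calculation mentioned above, the area spanned by the corner vowels is often taken as the triangle formed by their mean (F1, F2) coordinates; the sketch below uses the standard shoelace formula with invented formant values, since the study's exact computation is not stated in the abstract.

```python
def vowel_space_area(a, i, u):
    """Triangle area (Hz^2) spanned by the (F1, F2) means of the corner vowels."""
    (x1, y1), (x2, y2), (x3, y3) = a, i, u
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2


# Invented (F1, F2) means in Hz for [a], [i], and [u].
area = vowel_space_area(a=(850, 1300), i=(300, 2500), u=(350, 800))
print(f"vowel space = {area:.0f} Hz^2")
```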

  20. Validation of the Korean Version of the Spatial Hearing Questionnaire for Assessing the Severity and Symmetry of Hearing Impairment.

    PubMed

    Kong, Tae Hoon; Park, Yoon Ah; Bong, Jeong Pyo; Park, Sang Yoo

    2017-07-01

    Spatial hearing refers to the ability to understand speech and identify sounds in various environments. We assessed the validity of the Korean version of the Spatial Hearing Questionnaire (K-SHQ). We performed forward translation of the original English SHQ to Korean and backward translation from Korean to English. Forty-eight patients who were able to read and understand Korean and received a score of 24 or higher on the Mini-Mental Status Examination were included in the study. Patients underwent pure tone audiometry (PTA) using a standard protocol and completed the K-SHQ. Internal consistency was evaluated using Cronbach's alpha, and factor analysis was performed to assess reliability. Construct validity was tested by comparing K-SHQ scores from patients with normal hearing to those with hearing impairment. Scores were compared between subjects with unilateral or bilateral hearing loss and between symmetrical and asymmetrical hearing impairment. Cronbach's alpha showed good internal consistency (0.982). Two factors were identified by factor analysis. There was a significant difference in K-SHQ scores for patients with normal hearing compared to those with hearing impairment. Patients with asymmetric hearing impairment had higher K-SHQ scores than those with symmetric hearing impairment. This is related to a lower PTA threshold in these subjects' better ear. The hearing ability of the better ear is correlated with K-SHQ score. The K-SHQ is a reliable and valid tool with which to assess spatial hearing in patients who speak and read Korean. K-SHQ score reflects the severity and symmetry of hearing impairment. © Copyright: Yonsei University College of Medicine 2017
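    Cronbach's alpha, the internal-consistency statistic reported above, is k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below applies that formula to a small invented response matrix purely to show the computation; it is not the study's data.

```python
import numpy as np


def cronbach_alpha(scores):
    """scores: respondents x items array of questionnaire ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


responses = [[3, 4, 3, 5], [2, 2, 3, 2], [5, 5, 4, 5], [1, 2, 1, 2]]  # invented ratings
print(round(cronbach_alpha(responses), 3))
```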

  1. Development of Spatial Release from Masking in Mandarin-Speaking Children with Normal Hearing

    ERIC Educational Resources Information Center

    Yuen, Kevin C. P.; Yuan, Meng

    2014-01-01

    Purpose: This study investigated the development of spatial release from masking in children using closed-set Mandarin disyllabic words and monosyllabic words carrying lexical tones as test stimuli and speech spectrum-weighted noise as a masker. Method: Twenty-six children ages 4-9 years and 12 adults, all with normal hearing, participated in…

  2. Nonword Repetition by Children with Cochlear Implants: Accuracy Ratings from Normal-Hearing Listeners.

    ERIC Educational Resources Information Center

    Dillon, Caitlin M.; Burkholder, Rose A.; Cleary, Miranda; Pisoni, David B.

    2004-01-01

    Seventy-six children with cochlear implants completed a nonword repetition task. The children were presented with 20 nonword auditory patterns over a loudspeaker and were asked to repeat them aloud to the experimenter. The children's responses were recorded on digital audiotape and then played back to normal-hearing adult listeners to obtain…

  3. Domain Specificity and Everyday Biological, Physical, and Psychological Thinking in Normal, Autistic, and Deaf Children.

    ERIC Educational Resources Information Center

    Peterson, Candida C.; Siegal, Michael

    1997-01-01

    Examined reasoning in normal, autistic, and deaf individuals. Found that deaf individuals who grow up in hearing homes without fluent signers show selective impairments in theory of mind similar to those of autistic individuals. Results suggest that conversational differences in the language children hear accounts for distinctive patterns of…

  4. Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners

    ERIC Educational Resources Information Center

    Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie

    2012-01-01

    Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…

  5. Effects of hearing loss on speech recognition under distracting conditions and working memory in the elderly.

    PubMed

    Na, Wondo; Kim, Gibbeum; Kim, Gungu; Han, Woojae; Kim, Jinsook

    2017-01-01

    The current study aimed to evaluate hearing-related changes in terms of speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. One hundred subjects aged 65-84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All the participants were tested for speech perception in quiet and noisy conditions and for speech perception with time alteration in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants' working memory. 1) As the level of background noise increased, speech perception scores systematically decreased in all the groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal hearing group. 2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, sentences compressed by 30% revealed a clear differentiation between moderate hearing loss and moderate-to-severe hearing loss. 3) Although all the groups showed a longer span on the forward-digit span test than the backward-digit span test, there was no significant difference as a function of hearing loss. The degree of hearing loss strongly affects the speech recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to be applied to appropriate rehabilitation strategies for hearing-impaired elderly who experience difficulty in communication.

  6. Otitis media and hearing loss among 12-16-year-old Inuit of Inukjuak, Quebec, Canada.

    PubMed

    Ayukawa, Hannah; Bruneau, Suzanne; Proulx, Jean-François; Macarthur, Judy; Baxter, James

    2004-01-01

    Chronic otitis media (COM) and associated hearing loss are a frequent problem for many Inuit children in Canada. In this study, we evaluated individuals aged 12-16 years living in Inukjuak, to determine the prevalence of middle ear disease and hearing loss, and the effect of hearing loss on academic performance. Otological examination, hearing testing, and medical and school file review were performed in November 1997, and 88 individuals were seen. Otological examination revealed maximal scarring in 1.8%, minimal scarring in 34.9%, normal eardrums in 49.1% and chronic otitis media in 16.9%. There were 62 individuals whose ear exams could be directly compared with a previous exam done in 1987. Of those, three ears had developed COM and 4/13 ears with COM in 1987 had healed. Hearing tests found bilateral normal hearing in 80% (PTA < 20 dB), unilateral loss in 15% and bilateral loss in 5%. Hearing loss was associated with poorer academic performance in Language (p < .05). A similar trend was found in Mathematics but not in Inuttitut. Chronic otitis media remains a significant problem among the Inuit, with a prevalence of 16.9% in individuals aged 12-16 years. One in five in this age group has hearing loss, and this hearing loss affects academic performance.
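    The "PTA < 20 dB" criterion above is a pure-tone-average cut-off. PTA is commonly the mean air-conduction threshold over 0.5, 1, and 2 kHz (some protocols add 4 kHz); the study does not state its exact frequency set, so that choice in the sketch below is an assumption, and the thresholds are invented.

```python
def pure_tone_average(thresholds_db, freqs=(500, 1000, 2000)):
    """Mean hearing threshold (dB HL) across the chosen audiometric frequencies."""
    return sum(thresholds_db[f] for f in freqs) / len(freqs)


left_ear = {500: 10, 1000: 15, 2000: 20, 4000: 35}  # invented thresholds (dB HL)
pta = pure_tone_average(left_ear)
print(f"PTA = {pta:.1f} dB HL ->", "normal" if pta < 20 else "hearing loss")
```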

  7. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.

  8. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601621 PMID:29049603

  9. Social inclusion and career development--transition from upper secondary school to work or post-secondary education among hard of hearing students.

    PubMed

    Danermark, B; Antonson, S; Lundström, I

    2001-01-01

    The aim of this study was to investigate the decision process and to analyse the mechanisms involved in the transition from upper secondary education to post-secondary education or the labour market. Sixteen students with sensorineural hearing loss were selected. Among these, eight of the students continued to university and eight did not. Twenty-five per cent of the students were women and the average age was 28 years. The investigation was conducted about 5 years after graduation from upper secondary school. Both quantitative and qualitative methods were used. The results showed that none of the students came from a family where either or both of the parents had a university or comparable education. The differences in choice between the two groups cannot be explained in terms of social inheritance. Our study indicates that, given normal intellectual capacity, the level of hearing loss seems to have no predictive value regarding future educational performance and academic career. The conclusion is that it is of great importance that a hearing-impaired pupil with normal intellectual capacity is encouraged and guided to choose an upper secondary educational programme that is orientated towards post-secondary education (instead of a narrow vocational programme). In addition to their hearing impairment and related educational problems, hard of hearing students have much more difficulty than normal hearing peers in coping with changes in intentions and goals regarding their educational career during their upper secondary education.

  10. Affective Properties of Mothers' Speech to Infants With Hearing Impairment and Cochlear Implants

    PubMed Central

    Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine

    2015-01-01

    Purpose The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method Mothers of infants with HI and mothers of infants with normal hearing matched by age (NH-AM) or hearing experience (NH-EM) were recorded playing with their infants during 3 sessions over a 12-month period. Speech samples of 25 s were low-pass filtered, leaving intonation but not speech information intact. Sixty adults rated the stimuli along 5 scales: positive/negative affect and intention to express affection, to encourage attention, to comfort/soothe, and to direct behavior. Results Low-pass filtered speech to the HI and NH-EM groups was rated as more positive, affective, and comforting compared with such speech to the NH-AM group. Speech to infants with HI and with NH-AM was rated as more directive than speech to the NH-EM group. Mothers decreased affective qualities in speech to all infants but increased directive qualities in speech to infants with NH-EM over time. Conclusions Mothers fine-tune communicative intent in speech to their infant's developmental stage. They adjust affective qualities to infants' hearing experience rather than to chronological age but adjust directive qualities of speech to the chronological age of their infants. PMID:25679195

  11. Comparison of characteristics observed in tinnitus patients with unilateral vs bilateral symptoms, with both normal hearing threshold and distortion-product otoacoustic emissions.

    PubMed

    Zagólski, Olaf; Stręk, Paweł

    2017-02-01

    Tinnitus characteristics in normal-hearing patients differ between the groups with unilateral and bilateral complaints. The aim of the study was to determine the differences between tinnitus characteristics observed in patients with unilateral vs bilateral symptoms and normal hearing thresholds, as well as normal results of distortion-product otoacoustic emissions (DPOAEs). The patients answered questions concerning tinnitus duration, laterality, character, accompanying symptoms, and circumstances of onset. The results of tympanometry, auditory brainstem responses, tinnitus likeness spectrum, minimum masking level (MML), and uncomfortable loudness level were evaluated. Records of 380 tinnitus sufferers were examined. Patients with abnormal audiograms and/or DPOAEs were excluded. The remaining 66 participants were divided into groups with unilateral and bilateral tinnitus. Unilateral tinnitus in normal-hearing patients was diagnosed twice as frequently as bilateral tinnitus. Tinnitus pitch was higher in the group with bilateral tinnitus (p < .001). MML was lower in unilateral tinnitus (p < .05). The mean age of patients was higher in the unilateral tinnitus group (p < .05). Mean tinnitus duration was longer (p < .05) and hypersensitivity to sound was more frequent (p < .05) in the bilateral tinnitus group. Repeated exposure to excessive noise was the most frequent cause in the bilateral tinnitus group.

  12. Comparison of distortion product otoacoustic emissions with auditory brain-stem response for clinical use in neonatal intensive care unit.

    PubMed

    Ochi, A; Yasuhara, A; Kobayashi, Y

    1998-11-01

    This study compares the clinical usefulness of distortion product otoacoustic emissions (DPOAEs) with that of the auditory brain-stem response (ABR) for the evaluation of hearing impairment in neonates in the neonatal intensive care unit. Both DPOAEs and ABR were performed on 36 neonates (67 ears) on the same day. We defined neonates as having normal hearing when the thresholds of wave V of the ABR were ≤45 dB hearing level. (1) We could not obtain DPOAEs at f2 = 977 Hz in neonates with normal hearing because of high noise floors. DPOAE recording time was 36 min shorter than that of ABR. (2) We defined DPOAEs as normal when the DP level exceeded the noise floor by ≥4 dB at four or more of the six f2 frequencies from 1416 Hz to 7959 Hz. (3) Normal ABR thresholds and normal DPOAEs occurred at the same rate, i.e., 68.7%, but the results of ABR and DPOAEs differed in 6.0% of ears. Our study indicates that DPOAEs represent a simple procedure, which can be easily performed in the NICU to obtain reliable results in high-risk neonates. Results obtained by DPOAEs were comparable to those obtained by the more complex procedure of ABR.
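
    The pass criteria quoted in this record (ABR wave V threshold ≤45 dB HL; DPOAEs normal when at least 4 of the 6 f2 frequencies between 1416 and 7959 Hz exceed the noise floor by at least 4 dB) can be restated directly in code. The sketch below only re-expresses those cut-offs; the variable names and data layout are assumptions.

        def abr_normal(wave_v_threshold_db_hl):
            """ABR judged normal when the wave V threshold is <= 45 dB HL."""
            return wave_v_threshold_db_hl <= 45

        def dpoae_normal(dp_minus_noise_db):
            """dp_minus_noise_db: six (DP level - noise floor) values in dB, one per
            f2 frequency from 1416 to 7959 Hz. Normal when >= 4 values are >= 4 dB."""
            return sum(level >= 4 for level in dp_minus_noise_db) >= 4

        print(abr_normal(40), dpoae_normal([6, 5, 3, 7, 4, 1]))  # True True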

  13. Describing the trajectory of language development in the presence of severe-to-profound hearing loss: a closer look at children with cochlear implants versus hearing aids.

    PubMed

    Yoshinaga-Itano, Christine; Baca, Rosalinda L; Sedey, Allison L

    2010-10-01

    The objective of this investigation was to describe the language growth of children with severe or profound hearing loss who used cochlear implants versus children with the same degree of hearing loss who used hearing aids. A prospective longitudinal observation and analysis. University of Colorado Department of Speech Language and Hearing Sciences. There were 87 children with severe-to-profound hearing loss from 48 to 87 months of age. All children received early intervention services through the Colorado Home Intervention Program. Most children received intervention services from a certified auditory-verbal therapist or an auditory-oral therapist and weekly sign language instruction from an instructor who was deaf or hard of hearing and native or fluent in American Sign Language. The Test of Auditory Comprehension of Language, 3rd Edition, and the Expressive One Word Picture Vocabulary Test, 3rd Edition, were the assessment tools for children 4 to 7 years of age. The expressive language subscale of the Minnesota Child Development Inventory was used in the infant/toddler period (birth to 36 mo). Average language estimates at 84 months of age were nearly identical to the normative sample for receptive language and 7 months delayed for expressive vocabulary. Children demonstrated a mean rate of growth from 4 years through 7 years on these 2 assessments that was equivalent to that of their normal-hearing peers. As a group, children with hearing aids deviated more from the age-equivalent trajectory on the Test of Auditory Comprehension of Language, 3rd Edition, and the Expressive One Word Picture Vocabulary Test, 3rd Edition, than children with cochlear implants. When a subset of children was divided into performance categories, we found that children with cochlear implants were more likely to be "gap closers" and less likely to be "gap openers," whereas the reverse was true for the children with hearing aids on both measures. Children who are educated through combined oral-aural and sign language instruction can achieve age-appropriate language levels on expressive vocabulary and receptive syntax at ages 4 through 7 years. However, it was easier to maintain a constant rate of development than to accelerate from birth through 84 months of age; maintaining a constant rate characterized approximately 80% of our sample. Nevertheless, acceleration of language development is possible in some children and could result from cochlear implantation.

  14. Maternal Distancing Strategies toward Twin Sons, One with Mild Hearing Loss: A Case Study

    ERIC Educational Resources Information Center

    Munoz-Silva, Alicia; Sanchez-Garcia, Manuel

    2004-01-01

    The authors apply descriptive and sequential analyses to a mother's distancing strategies toward her 3-year-old twin sons in puzzle assembly and book reading tasks. One boy had normal hearing and the other a mild hearing loss (threshold: 30 dB). The results show that the mother used more distancing behaviors with the son with a hearing loss, and…

  15. Effects of a cochlear implant simulation on immediate memory in normal-hearing adults

    PubMed Central

    Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.

    2012-01-01

    This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants’ normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than to memory processing errors related to cognitive load. These findings provide complementary information to earlier research on the auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of hearing impairment. PMID:16317807
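
    The record states that degraded-speech digit spans were predicted from each listener's normal span and isolated-digit identification accuracy, but it does not give the model. The sketch below is one plausible, heavily simplified formulation of that idea (independent identification errors combined with a logistic recall curve centred on the normal span); it is not the authors' actual procedure, and every name and parameter in it is hypothetical.

        import math

        def recall_prob_intact(n, normal_span, slope=1.0):
            """Crude logistic guess at the probability of recalling an intact list of
            length n, with the measured normal span taken as the 50% point."""
            return 1.0 / (1.0 + math.exp(slope * (n - normal_span)))

        def predicted_degraded_span(normal_span, p_id, max_len=12):
            """Longest list length whose joint probability of identifying every
            degraded digit (p_id each, independence assumed) and then recalling
            the list still reaches 50%."""
            feasible = [n for n in range(1, max_len + 1)
                        if (p_id ** n) * recall_prob_intact(n, normal_span) >= 0.5]
            return max(feasible) if feasible else 0

        print(predicted_degraded_span(normal_span=7, p_id=0.95))  # e.g. 6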

  16. The role of auditory and kinaesthetic feedback mechanisms on phonatory stability in children.

    PubMed

    Rathna Kumar, S B; Azeem, Suhail; Choudhary, Abhishek Kumar; Prakash, S G R

    2013-12-01

    Auditory feedback plays an important role in phonatory control. When auditory feedback is disrupted, various changes are observed in vocal motor control. Vocal intensity and fundamental frequency (F0) levels tend to increase in response to auditory masking. Because of the close reflexive links between the auditory and phonatory systems, it is likely that phonatory stability is disrupted when auditory feedback is disrupted or altered. However, studies on phonatory stability under auditory masking conditions in adult subjects showed that most of the subjects maintained normal levels of phonatory stability. The authors of those earlier investigations suggested that auditory feedback is not the sole contributor to vocal motor control and phonatory stability; a complex neuromuscular reflex system known as kinaesthetic feedback may play a role in controlling phonatory stability when auditory feedback is disrupted or lacking. This raises the question of whether children show similar patterns of phonatory stability under auditory masking, since their neuromotor systems are still developing, less mature, and less resistant to altered auditory feedback than those of adults. A total of 40 children (20 male and 20 female) with normal hearing and speech, aged 6 to 8 years, participated as subjects. Acoustic parameters, namely shimmer, jitter and harmonic-to-noise ratio (HNR), were measured and compared between the no-masking condition (0 dB ML) and the masking condition (90 dB ML). Despite their neuromotor systems being less mature and less resistant than those of adults to altered auditory feedback, most of the children in the study demonstrated increased phonatory stability, reflected in reduced shimmer and jitter and increased HNR values. This study indicates that most of the children demonstrate well-established patterns of kinaesthetic feedback, which might have allowed them to maintain normal levels of vocal motor control even in the presence of disturbed auditory feedback. Hence, it can be concluded that children also exhibit a kinaesthetic feedback mechanism to control phonatory stability when auditory feedback is disrupted, which in turn highlights the importance of including kinaesthetic feedback in therapeutic/intervention approaches for children with hearing and neurogenic speech deficits.
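
    Shimmer, jitter and HNR are standard perturbation measures of phonatory stability. The sketch below computes local jitter and shimmer from extracted glottal-cycle data using common textbook definitions (mean absolute cycle-to-cycle change as a percentage of the mean); HNR, which is usually estimated from the ratio of periodic to aperiodic energy, is omitted. This is illustrative only and is not the analysis software used in the study.

        import numpy as np

        def local_jitter_percent(periods_s):
            """Mean absolute difference between consecutive cycle periods,
            as a percentage of the mean period."""
            p = np.asarray(periods_s, dtype=float)
            return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

        def local_shimmer_percent(peak_amplitudes):
            """Mean absolute difference between consecutive cycle peak amplitudes,
            as a percentage of the mean amplitude."""
            a = np.asarray(peak_amplitudes, dtype=float)
            return 100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a)

        # A steadier phonation yields lower jitter and shimmer (and a higher HNR).
        periods = [0.00500, 0.00502, 0.00499, 0.00501, 0.00500]
        amplitudes = [0.81, 0.80, 0.82, 0.80, 0.81]
        print(local_jitter_percent(periods), local_shimmer_percent(amplitudes))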

  17. Musicians change their tune: how hearing loss alters the neural code.

    PubMed

    Parbery-Clark, Alexandra; Anderson, Samira; Kraus, Nina

    2013-08-01

    Individuals with sensorineural hearing loss have difficulty understanding speech, especially in background noise. This deficit remains even when audibility is restored through amplification, suggesting that mechanisms beyond a reduction in peripheral sensitivity contribute to the perceptual difficulties associated with hearing loss. Given that normal-hearing musicians have enhanced auditory perceptual skills, including speech-in-noise perception, coupled with heightened subcortical responses to speech, we aimed to determine whether similar advantages could be observed in middle-aged adults with hearing loss. Results indicate that musicians with hearing loss, despite self-perceptions of average performance for understanding speech in noise, have a greater ability to hear in noise relative to nonmusicians. This is accompanied by more robust subcortical encoding of sound (e.g., stimulus-to-response correlations and response consistency) as well as more resilient neural responses to speech in the presence of background noise (e.g., neural timing). Musicians with hearing loss also demonstrate unique neural signatures of spectral encoding relative to nonmusicians: enhanced neural encoding of the speech-sound's fundamental frequency but not of its upper harmonics. This stands in contrast to previous outcomes in normal-hearing musicians, who have enhanced encoding of the harmonics but not the fundamental frequency. Taken together, our data suggest that although hearing loss modifies a musician's spectral encoding of speech, the musician advantage for perceiving speech in noise persists in a hearing-impaired population by adaptively strengthening underlying neural mechanisms for speech-in-noise perception. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Hearing loss is associated with decreased nonverbal intelligence in rural Nepal.

    PubMed

    Emmett, Susan D; Schmitz, Jane; Pillion, Joseph; Wu, Lee; Khatry, Subarna K; Karna, Sureshwar L; LeClerq, Steven C; West, Keith P

    2015-01-01

    To evaluate the association between adolescent and young-adult hearing loss and nonverbal intelligence in rural Nepal. Cross-sectional assessment of hearing loss among a population cohort of adolescents and young adults. Sarlahi District, southern Nepal. Seven hundred sixty-four individuals aged 14 to 23 years. Evaluation of hearing loss, defined by World Health Organization criteria of pure-tone average greater than 25 decibels (0.5, 1, 2, 4 kHz), unilaterally and bilaterally. Nonverbal intelligence, as measured by the Test of Nonverbal Intelligence, 3rd Edition standardized score (mean, 100; standard deviation, 15). Nonverbal intelligence scores differed between participants with normal hearing and those with bilateral (p = 0.04) but not unilateral (p = 0.74) hearing loss. Demographic and socioeconomic factors including male sex; higher caste; literacy; education level; occupation reported as student; and ownership of a bicycle, watch, and latrine were strongly associated with higher nonverbal intelligence scores (all p < 0.001). Subjects with bilateral hearing loss scored an average of 3.16 points lower (95% confidence interval, -5.56 to -0.75; p = 0.01) than subjects with normal hearing after controlling for socioeconomic factors. There was no difference in nonverbal intelligence score based on unilateral hearing loss (0.97; 95% confidence interval, -1.67 to 3.61; p = 0.47). Nonverbal intelligence is adversely affected by bilateral hearing loss, even at mild hearing loss levels. Socioeconomic well-being appears compromised in individuals with lower nonverbal intelligence test scores.

  19. Hearing Loss is Associated with Decreased Nonverbal Intelligence in Rural Nepal

    PubMed Central

    Emmett, Susan D.; Schmitz, Jane; Pillion, Joseph; Wu, Lee; Khatry, Subarna K.; Karna, Sureshwar L.; LeClerq, Steven C.; West, Keith P.

    2014-01-01

    Objective Evaluate the association between adolescent and young adult hearing loss and nonverbal intelligence in rural Nepal Study Design Cross-sectional assessment of hearing loss among a population cohort of adolescents and young adults Setting Sarlahi District, southern Nepal Patients 764 individuals aged 14–23 years Intervention Evaluation of hearing loss, defined by WHO criteria of pure-tone average (PTA) >25 decibels (0.5, 1, 2, 4 kHz), unilaterally and bilaterally Main Outcome Measure Nonverbal intelligence, measured by the Test of Nonverbal Intelligence, 3rd Edition (TONI-3) standardized score (mean 100; standard deviation (SD) 15) Results Nonverbal intelligence scores differed between participants with normal hearing and those with bilateral (p =0.04) but not unilateral (p =0.74) hearing loss. Demographic and socioeconomic factors including male sex, higher caste, literacy, education level, occupation reported as student, and ownership of a bicycle, watch, and latrine were strongly associated with higher nonverbal intelligence scores (all p <0.001). Subjects with bilateral hearing loss scored an average of 3.16 points lower (95% CI: −5.56, −0.75; p =0.01) than subjects with normal hearing after controlling for socioeconomic factors. There was no difference in nonverbal intelligence score based on unilateral hearing loss (0.97; 95% CI: −1.67, 3.61; p =0.47). Conclusions Nonverbal intelligence is adversely affected by bilateral hearing loss, even at mild hearing loss levels. Social and economic well being appear compromised in individuals with lower nonverbal intelligence test scores. PMID:25299832

  20. How age affects memory task performance in clinically normal hearing persons.

    PubMed

    Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-05-01

    The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants passed a cognitive screening test (the Montreal Cognitive Assessment, MoCA). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform as well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.

  1. Quality of Life and Hearing Eight Years After Sudden Sensorineural Hearing Loss.

    PubMed

    Härkönen, Kati; Kivekäs, Ilkka; Rautiainen, Markus; Kotti, Voitto; Vasama, Juha-Pekka

    2017-04-01

    To explore long-term hearing results, quality of life (QoL), quality of hearing (QoH), work-related stress, tinnitus, and balance problems after idiopathic sudden sensorineural hearing loss (ISSNHL). Cross-sectional study. We reviewed the audiograms of 680 patients with unilateral ISSNHL on average 8 years after the hearing impairment, and then divided the patients into two study groups based on whether their ISSNHL had recovered to normal (pure tone average [PTA] ≤ 30 dB) or not (PTA > 30 dB). The inclusion criteria were a hearing threshold decrease of 30 dB or more in at least three contiguous frequencies occurring within 72 hours in the affected ear and normal hearing in the contralateral ear. Audiograms of 217 patients fulfilled the criteria. We reviewed their medical records; measured present QoL, QoH, and work-related stress with specific questionnaires; and updated the hearing status. Poor hearing outcome after ISSNHL was correlated with age, severity of hearing loss, and vertigo accompanying the ISSNHL. Quality of life and QoH were statistically significantly better in patients with recovered hearing, and these patients had statistically significantly less tinnitus and fewer balance problems. During the 8-year follow-up, the PTA of the affected ear deteriorated on average 7 dB, and that of the healthy ear deteriorated 6 dB. Idiopathic sudden sensorineural hearing loss that failed to recover had a negative impact on long-term QoL and QoH. Hearing deteriorated similarly as a function of age in both the affected and the healthy ear, and there were no differences between the groups. The cumulative recurrence rate for ISSNHL was 3.5%. Level of evidence: 4. Laryngoscope, 127:927-931, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
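
    The inclusion and recovery criteria in this record (a threshold decrease of 30 dB or more at three or more contiguous frequencies within 72 hours, and recovery defined as a PTA ≤30 dB) translate into two small checks. The sketch below is an illustrative restatement only; the frequency grid and function names are assumptions, and the 72-hour onset window must come from the clinical history.

        def meets_issnhl_threshold_criterion(baseline_db, affected_db):
            """True if thresholds worsened by >= 30 dB at >= 3 contiguous frequencies.

            baseline_db / affected_db: threshold lists (dB HL) over the same ordered
            audiometric frequencies for the affected ear.
            """
            drops = [after - before >= 30 for before, after in zip(baseline_db, affected_db)]
            run = longest = 0
            for dropped in drops:
                run = run + 1 if dropped else 0
                longest = max(longest, run)
            return longest >= 3

        def hearing_recovered(pta_db):
            """Recovery as defined in the study: PTA <= 30 dB in the affected ear."""
            return pta_db <= 30

        print(meets_issnhl_threshold_criterion([10, 10, 15, 20, 25, 30],
                                               [15, 45, 50, 55, 30, 35]))  # True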

  2. Effects of modulation phase on profile analysis in normal-hearing and hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Rogers, Deanna; Lentz, Jennifer

    2003-04-01

    The ability to discriminate between sounds with different spectral shapes in the presence of amplitude modulation was measured in normal-hearing and hearing-impaired listeners. The standard stimulus was the sum of equal-amplitude modulated tones, and the signal stimulus was generated by increasing the level of half the tones (up components) and decreasing the level of half the tones (down components). The down components had the same modulation phase, and a phase shift was applied to the up components to encourage segregation from the down tones. The same phase shift was used in both standard and signal stimuli. Profile-analysis thresholds were measured as a function of the phase shift between up and down components. The phase shifts were 0, 30, 45, 60, 90, and 180 deg. As expected, thresholds were lowest when all tones had the same modulation phase and increased somewhat with increasing phase disparity. This small increase in thresholds was similar for both groups. These results suggest that hearing-impaired listeners are able to use modulation phase to group sounds in a manner similar to that of normal listeners. [Work supported by NIH (DC 05835).]
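
    The standard and signal stimuli described above (a sum of equal-amplitude modulated tones, with half the components incremented and half decremented in level, and a modulation phase shift applied only to the "up" components) can be synthesized in a few lines. The sketch below is a generic reconstruction; the carrier frequencies, modulation rate and depth, duration and sampling rate are assumed values, not the study's exact parameters.

        import numpy as np

        def profile_stimulus(carriers_hz, delta_db, up_phase_deg,
                             fs=44100, dur=0.5, mod_rate_hz=10.0, mod_depth=0.5):
            """Sum of amplitude-modulated tones for a profile-analysis trial (sketch).

            The first half of carriers_hz are the "up" components (level +delta_db,
            modulation phase shifted by up_phase_deg); the rest are the "down"
            components (level -delta_db, reference phase). delta_db = 0 gives the
            standard stimulus with the same phase relationships.
            """
            t = np.arange(int(fs * dur)) / fs
            n = len(carriers_hz)
            out = np.zeros_like(t)
            for i, fc in enumerate(carriers_hz):
                is_up = i < n // 2
                gain = 10 ** ((delta_db if is_up else -delta_db) / 20.0)
                phase = np.deg2rad(up_phase_deg) if is_up else 0.0
                envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t + phase)
                out += gain * envelope * np.sin(2 * np.pi * fc * t)
            return out / n

        signal = profile_stimulus([400, 700, 1200, 2000, 3400, 5600],
                                  delta_db=2.0, up_phase_deg=90.0)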

  3. Validation of the second version of the LittlEARS® Early Speech Production Questionnaire (LEESPQ) in German-speaking children with normal hearing.

    PubMed

    Keilmann, Annerose; Friese, Barbara; Lässig, Anne; Hoffmann, Vanessa

    2018-04-01

    The introduction of neonatal hearing screening and the increasingly early age at which children can receive a cochlear implant have intensified the need for a validated questionnaire to assess the speech production of children aged 0‒18 months. Such a questionnaire has been created: the LittlEARS® Early Speech Production Questionnaire (LEESPQ). This study aimed to validate a second, revised edition of the LEESPQ. Questionnaires were returned for 362 children with normal hearing. Completed questionnaires were analysed to determine whether the LEESPQ is reliable, prognostically accurate, and internally consistent, and whether gender or multilingualism affects total scores. Total scores correlated positively with age. The LEESPQ is reliable, accurate, and consistent, and independent of gender or lingual status. A norm curve was created. This second version of the LEESPQ is a valid tool to assess the speech production development of children with normal hearing, aged 0‒18 months, regardless of their gender. As such, the LEESPQ may be a useful tool to monitor the development of paediatric hearing device users. The second version of the LEESPQ is a valid instrument for assessing the early speech production of children aged 0‒18 months.

  4. Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M

    2011-01-01

    The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman correlation coefficients indicated that the cognitive load was larger in listeners with better TRT performances as reflected by a longer peak latency (normal-hearing participants, SRT50% condition) and a larger peak amplitude and longer response duration (hearing-impaired participants, SRT50% and SRT84% conditions). Also, a larger word vocabulary was related to longer response duration in the SRT84% condition for the participants with normal hearing. The pupil response systematically increased with decreasing speech intelligibility. Ageing and hearing loss were related to less release from effort when increasing the intelligibility of speech in noise. In difficult listening conditions, these factors may induce cognitive overload relatively early or they may be associated with relatively shallow speech processing. More research is needed to elucidate the underlying mechanisms explaining these results. Better TRTs and larger word vocabulary were related to higher mental processing load across speech intelligibility levels. This indicates that utilizing linguistic ability to improve speech perception is associated with increased listening load.
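
    The pupil measures reported here (peak amplitude, peak latency, mean dilation and response duration) are typically derived from a baseline-corrected pupil trace. The sketch below shows one common way to compute them; the baseline window, the half-maximum convention used for duration, and all names are assumptions rather than the study's exact definitions.

        import numpy as np

        def pupil_metrics(trace_mm, fs, baseline_s=1.0):
            """Baseline-correct a pupil-diameter trace and summarize the response.

            trace_mm: pupil diameter over one trial; the first baseline_s seconds
            are treated as the pre-stimulus baseline.
            """
            x = np.asarray(trace_mm, dtype=float)
            n_base = int(baseline_s * fs)
            dilation = x - x[:n_base].mean()      # change relative to baseline
            response = dilation[n_base:]          # post-baseline portion
            peak_idx = int(np.argmax(response))
            peak_amplitude = float(response[peak_idx])
            # Duration: time spent above half the peak dilation (one possible convention).
            duration_s = float(np.sum(response >= 0.5 * peak_amplitude)) / fs
            return {"peak_amplitude_mm": peak_amplitude,
                    "peak_latency_s": peak_idx / fs,
                    "mean_dilation_mm": float(response.mean()),
                    "response_duration_s": duration_s}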

  5. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.

    2017-01-01

    Purpose: Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This…

  6. Hearing

    ERIC Educational Resources Information Center

    Koehlinger, Keegan M.; Van Horne, Amanda J. Owen; Moeller, Mary Pat

    2013-01-01

    Purpose: Spoken language skills of 3- and 6-year-old children who are hard of hearing (HH) were compared with those of children with normal hearing (NH). Method: Language skills were measured via mean length of utterance in words (MLUw) and percent correct use of finite verb morphology in obligatory contexts based on spontaneous conversational…

  7. The Hearing Environment

    ERIC Educational Resources Information Center

    Capewell, Carmel

    2014-01-01

    Glue ear, a condition resulting in intermittent hearing loss, affects about 80% of young children under seven years old. About 60% of children will spend a third of their time unable to hear within normal thresholds. Teachers are unlikely to consider the sound quality in classrooms. In my research, young people provided…

  8. Binaural Loudness Summation in the Hearing Impaired.

    ERIC Educational Resources Information Center

    Hawkins, David B.; And Others

    1987-01-01

    Binaural loudness summation was measured using three different paradigms with 10 normally hearing and 20 bilaterally symmetrical high-frequency sensorineural hearing loss subjects. Binaural summation increased with presentation level using the loudness matching procedure, with values in the 6-10 dB range. Summation decreased with level using the…

  9. The Ling 6(HL) test: typical pediatric performance data and clinical use evaluation.

    PubMed

    Glista, Danielle; Scollie, Susan; Moodie, Sheila; Easwar, Vijayalakshmi

    2014-01-01

    The Ling 6(HL) test offers a calibrated version of naturally produced speech sounds in dB HL for evaluation of detection thresholds. Aided performance has been previously characterized in adults. The purpose of this work was to evaluate and refine the Ling 6(HL) test for use in pediatric hearing aid outcome measurement. This work is presented across two studies incorporating an integrated knowledge translation approach in the characterization of normative and typical performance, and in the evaluation of clinical feasibility, utility, acceptability, and implementation. A total of 57 children, 28 normally hearing and 29 with binaural sensorineural hearing loss, were included in Study 1. Children wore their own hearing aids fitted using Desired Sensation Level v5.0. Nine clinicians from The Network of Pediatric Audiologists participated in Study 2. A CD-based test format was used in the collection of unaided and aided detection thresholds in laboratory and clinical settings; thresholds were measured clinically as part of routine clinical care. Confidence intervals were derived to characterize normal performance and typical aided performance according to hearing loss severity. Unaided-aided performance was analyzed using a repeated-measures analysis of variance. The audiologists completed an online questionnaire evaluating the quality, feasibility/executability, utility/comparative value/relative advantage, acceptability/applicability, and interpretability, in addition to recommendation and general comments sections. Ling 6(HL) thresholds were reliably measured with children 3-18 yr old. Normative and typical performance ranges were translated into a scoring tool for use in pediatric outcome measurement. In general, questionnaire respondents generally agreed that the Ling 6(HL) test was a high-quality outcome evaluation tool that can be implemented successfully in clinical settings. By actively collaborating with pediatric audiologists and using an integrated knowledge translation framework, this work supported the creation of an evidence-based clinical tool that has the potential to be implemented in, and useful to, clinical practice. More research is needed to characterize performance in alternative listening conditions to facilitate use with infants, for example. Future efforts focused on monitoring the use of the Ling 6(HL) test in daily clinical practice may help describe whether clinical use has been maintained across time and if any additional adaptations are necessary to facilitate clinical uptake. American Academy of Audiology.

  10. Alternative Splice Forms Influence Functions of Whirlin in Mechanosensory Hair Cell Stereocilia.

    PubMed

    Ebrahim, Seham; Ingham, Neil J; Lewis, Morag A; Rogers, Michael J C; Cui, Runjia; Kachar, Bechara; Pass, Johanna C; Steel, Karen P

    2016-05-03

    WHRN (DFNB31) mutations cause diverse hearing disorders: profound deafness (DFNB31) or variable hearing loss in Usher syndrome type II. The known role of WHRN in stereocilia elongation does not explain these different pathophysiologies. Using spontaneous and targeted Whrn mutants, we show that the major long (WHRN-L) and short (WHRN-S) isoforms of WHRN have distinct localizations within stereocilia and also across hair cell types. Lack of both isoforms causes abnormally short stereocilia and profound deafness and vestibular dysfunction. WHRN-S expression, however, is sufficient to maintain stereocilia bundle morphology and function in a subset of hair cells, resulting in some auditory response and no overt vestibular dysfunction. WHRN-S interacts with EPS8, and both are required at stereocilia tips for normal length regulation. WHRN-L localizes midway along the shorter stereocilia, at the level of inter-stereociliary links. We propose that differential isoform expression underlies the variable auditory and vestibular phenotypes associated with WHRN mutations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  11. The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children

    ERIC Educational Resources Information Center

    Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin

    2012-01-01

    Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…

  12. Word Learning in Deaf Children with Cochlear Implants: Effects of Early Auditory Experience

    ERIC Educational Resources Information Center

    Houston, Derek M.; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T.

    2012-01-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this…

  13. Auditory Brainstem Response Thresholds to Air- and Bone-Conducted CE-Chirps in Neonates and Adults

    ERIC Educational Resources Information Center

    Cobb, Kensi M.; Stuart, Andrew

    2016-01-01

    Purpose The purpose of this study was to compare auditory brainstem response (ABR) thresholds to air- and bone-conducted CE-Chirps in neonates and adults. Method Thirty-two neonates with no physical or neurologic challenges and 20 adults with normal hearing participated. ABRs were acquired with a starting intensity of 30 dB normal hearing level…

  14. Visual Attention in Deaf and Normal Hearing Adults: Effects of Stimulus Compatibility

    ERIC Educational Resources Information Center

    Sladen, Douglas P.; Tharpe, Anne Marie; Ashmead, Daniel H.; Grantham, D. Wesley; Chun, Marvin M.

    2005-01-01

    Visual perceptual skills of deaf and normal hearing adults were measured using the Eriksen flanker task. Participants were seated in front of a computer screen while a series of target letters flanked by similar or dissimilar letters was flashed in front of them. Participants were instructed to press one button when they saw an "H," and another…

  15. Responses to Targets in the Visual Periphery in Deaf and Normal-Hearing Adults

    ERIC Educational Resources Information Center

    Rothpletz, Ann M.; Ashmead, Daniel H.; Tharpe, Anne Marie

    2003-01-01

    The purpose of this study was to compare the response times of deaf and normal-hearing individuals to the onset of target events in the visual periphery in distracting and nondistracting conditions. Visual reaction times to peripheral targets placed at 3 eccentricities to the left and right of a center fixation point were measured in prelingually…

  16. Consonant Cluster Production in Children with Cochlear Implants: A Comparison with Normally Hearing Peers

    ERIC Educational Resources Information Center

    Faes, Jolien; Gillis, Steven

    2017-01-01

    In early word productions, the same types of errors are manifest in children with cochlear implants (CI) as in their normally hearing (NH) peers with respect to consonant clusters. However, the incidence of those types and their longitudinal development have not been examined or quantified in the literature thus far. Furthermore, studies on the…

  17. Response Errors in Females' and Males' Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy

    ERIC Educational Resources Information Center

    Bernstein, Lynne E.

    2018-01-01

    Lipreaders recognize words with phonetically impoverished stimuli, an ability that varies widely in normal-hearing adults. Lipreading accuracy for sentence stimuli was modeled with data from 339 normal-hearing adults. Models used measures of phonemic perceptual errors, insertion of text that was not in the stimulus, gender, and auditory speech…

  18. Voice gender identification by cochlear implant users: The role of spectral and temporal resolution

    NASA Astrophysics Data System (ADS)

    Fu, Qian-Jie; Chinchilla, Sherol; Nogaki, Geraldine; Galvin, John J.

    2005-09-01

    The present study explored the relative contributions of spectral and temporal information to voice gender identification by cochlear implant users and normal-hearing subjects. Cochlear implant listeners were tested using their everyday speech processors, while normal-hearing subjects were tested under speech processing conditions that simulated various degrees of spectral resolution, temporal resolution, and spectral mismatch. Voice gender identification was tested for two talker sets. In Talker Set 1, the mean fundamental frequency values of the male and female talkers differed by 100 Hz while in Talker Set 2, the mean values differed by 10 Hz. Cochlear implant listeners achieved higher levels of performance with Talker Set 1, while performance was significantly reduced for Talker Set 2. For normal-hearing listeners, performance was significantly affected by the spectral resolution, for both Talker Sets. With matched speech, temporal cues contributed to voice gender identification only for Talker Set 1 while spectral mismatch significantly reduced performance for both Talker Sets. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to 4-8 spectral channels. The results suggest that, because of the reduced spectral resolution, cochlear implant patients may attend strongly to periodicity cues to distinguish voice gender.
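
    Normal-hearing listeners in such simulations typically hear noise-vocoded speech, which preserves the temporal envelope in a small number of spectral channels while discarding fine spectral detail. The sketch below is a generic noise-band vocoder (analysis filters, envelope smoothing, noise-carrier resynthesis); the channel spacing, filter orders and envelope cut-off are assumed values, and it does not reproduce the study's exact processing or its spectral-mismatch conditions.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def noise_vocoder(x, fs, n_channels=4, f_lo=100.0, f_hi=7000.0, env_cut_hz=160.0):
            """Generic n-channel noise-band vocoder (illustrative parameters).

            fs must exceed 2 * f_hi. Returns a peak-normalized vocoded signal.
            """
            x = np.asarray(x, dtype=float)
            edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
            env_sos = butter(2, env_cut_hz / (fs / 2), btype="lowpass", output="sos")
            rng = np.random.default_rng(0)
            out = np.zeros_like(x)
            for lo, hi in zip(edges[:-1], edges[1:]):
                band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                                  btype="bandpass", output="sos")
                band = sosfiltfilt(band_sos, x)              # analysis band
                envelope = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
                carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
                out += envelope * carrier                    # envelope-modulated noise band
            return out / np.max(np.abs(out))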

  19. Binaural Fusion and Listening Effort in Children Who Use Bilateral Cochlear Implants: A Psychoacoustic and Pupillometric Study

    PubMed Central

    Steel, Morrison M.; Papsin, Blake C.; Gordon, Karen A.

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing. PMID:25668423

  20. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    PubMed

    Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn

    2014-01-01

    The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds and who could be assigned to one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise (Industry). N(total) = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was administered to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group.

  1. Tinnitus and Other Auditory Problems – Occupational Noise Exposure below Risk Limits May Cause Inner Ear Dysfunction

    PubMed Central

    Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn

    2014-01-01

    The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds and who could be assigned to one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise (Industry). N(total) = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was administered to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group. PMID:24827149

  2. Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss

    PubMed Central

    Karawani, Hanin; Bitan, Tali; Attias, Joseph; Banai, Karen

    2016-01-01

    Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: Fifty-six listeners (60–72 y/o) participated in the study: 35 with ARHL and 21 normal-hearing adults. The study used a crossover design with three groups (immediate-training, delayed-training, and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning but there was little generalization to untrained conditions. We suggest that most training-related changes occurred at the level of higher, task-specific cognitive processes in both groups. However, these were enhanced by high-quality perceptual representations in the normal-hearing group. In contrast, some training-related changes also occurred at the level of phonemic representations in the ARHL group, consistent with an interaction between bottom-up and top-down processes. PMID:26869944

  3. High-frequency hearing impairment assessed with cochlear microphonics.

    PubMed

    Zhang, Ming

    2012-09-01

    Cochlear microphonic (CM) measurements may potentially become a supplementary approach to otoacoustic emission (OAE) measurements for assessing low-frequency cochlear functions in the clinic. The objective of this study was to investigate the measurement of CMs in subjects with high-frequency hearing loss. Currently, CMs can be measured using electrocochleography (ECochG or ECoG) techniques. Both CMs and OAEs are cochlear responses, while auditory brainstem responses (ABRs) are not. However, there are inherent limitations associated with OAE measurements such as acoustic noise, which can conceal low-frequency OAEs measured in the clinic. However, CM measurements may not have these limitations. CMs were measured in human subjects using an ear canal electrode. The CMs were compared between the high-frequency hearing loss group and the normal-hearing control group. Distortion product OAEs (DPOAEs) and audiogram were also measured. The DPOAE and audiogram measurements indicate that the subjects were correctly selected for the two groups. Low-frequency CM waveforms (CMWs) can be measured using ear canal electrodes in high-frequency hearing loss subjects. The difference in amplitudes of CMWs between the high-frequency hearing loss group and the normal-hearing group is insignificant at low frequencies but significant at high frequencies.

  4. Association of Hearing Impairment With Incident Frailty and Falls in Older Adults

    PubMed Central

    Kamil, Rebecca J.; Betz, Joshua; Powers, Becky Brott; Pratt, Sheila; Kritchevsky, Stephen; Ayonayon, Hilsa N.; Harris, Tammy B.; Helzner, Elizabeth; Deal, Jennifer A.; Martin, Kathryn; Peterson, Matthew; Satterfield, Suzanne; Simonsick, Eleanor M.; Lin, Frank R.

    2017-01-01

    Objective We aimed to determine whether hearing impairment (HI) in older adults is associated with the development of frailty and falls. Method Longitudinal analysis of observational data from the Health, Aging and Body Composition study of 2,000 participants aged 70 to 79 was conducted. Hearing was defined by the pure-tone-average of hearing thresholds at 0.5, 1, 2, and 4 kHz in the better hearing ear. Frailty was defined as a gait speed of <0.60 m/s and/or inability to rise from a chair without using arms. Falls were assessed annually by self-report. Results Older adults with moderate-or-greater HI had a 63% increased risk of developing frailty (adjusted hazard ratio [HR] = 1.63, 95% confidence interval [CI] = [1.26, 2.12]) compared with normal-hearing individuals. Moderate-or-greater HI was significantly associated with a greater annual percent increase in odds of falling over time (9.7%, 95% CI = [7.0, 12.4] compared with normal hearing, 4.4%, 95% CI = [2.6, 6.2]). Discussion HI is independently associated with the risk of frailty in older adults and with greater odds of falling over time. PMID:26438083
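
    The exposure and outcome definitions in this record (better-ear pure-tone average at 0.5, 1, 2 and 4 kHz; frailty as gait speed below 0.60 m/s and/or inability to rise from a chair without using the arms) can be restated as simple checks. The sketch below does only that; the hearing category cut-offs (25 and 40 dB HL) and all names are assumptions added for illustration.

        def better_ear_pta(left_pta_db, right_pta_db):
            """Better-ear pure-tone average (0.5/1/2/4 kHz) from per-ear PTAs."""
            return min(left_pta_db, right_pta_db)

        def hearing_category(pta_db):
            """Rough categories (cut-offs assumed, not taken from the study)."""
            if pta_db < 25:
                return "normal"
            return "mild" if pta_db < 40 else "moderate-or-greater"

        def is_frail(gait_speed_m_s, can_rise_without_arms):
            """Frailty as defined in the record's Method description."""
            return gait_speed_m_s < 0.60 or not can_rise_without_arms

        print(hearing_category(better_ear_pta(42.5, 38.75)), is_frail(0.55, True))
        # -> mild True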

  5. Investigation of Psychophysiological and Subjective Effects of Long Working Hours – Do Age and Hearing Impairment Matter?

    PubMed Central

    Wagner-Hartl, Verena; Kallus, K. Wolfgang

    2018-01-01

    Current prognoses of demographic development point to an aging of the working population. Therefore, keeping employees healthy and strengthening their ability to work becomes more and more important. As employees become older, dealing with age-related impairments of sensory functions, such as hearing impairment, is a central issue. Recent evidence suggests that negative effects associated with reduced hearing can have a strong impact at work. Especially in exhausting working situations, such as working overtime hours, age and hearing impairment might influence employees’ well-being. Until now, neither the problem of older workers and long working hours nor the problem of hearing impairment and prolonged working time has been addressed explicitly. Therefore, a laboratory study was conducted to answer the research question: do age and hearing impairment have an impact on the psychophysiological and subjective effects of long working hours? In total, 51 white-collar workers, aged between 24 and 63 years, participated in the laboratory study. The results show no significant effects of age or hearing impairment on the intensity of the subjective consequences (perceived recovery and fatigue, subjective emotional well-being, and physical symptoms) of long working hours. However, the psychophysiological response (the saliva cortisol level) to long working hours differs significantly between hearing-impaired and normal-hearing employees. Interestingly, the results suggest that, from a psychophysiological point of view, long working hours were more demanding for normal-hearing employees. PMID:29379452

  6. Sensorineural Hearing Impairment and Subclinical Atherosclerosis in Rheumatoid Arthritis Patients Without Traditional Cardiovascular Risk Factors

    PubMed Central

    MACIAS-REYES, Hector; DURAN-BARRAGAN, Sergio; CARDENAS-CONTRERAS, Cynthia R.; CHAVEZ-MARTIN, Cesar G.; GOMEZ-BAÑUELOS, Eduardo; NAVARRO-HERNANDEZ, Rosa E.; YANOWSKY-GONZALEZ, Carlos O.; GONZALEZ-LOPEZ, Laura; GAMEZ-NAVA, Jorge I.

    2016-01-01

    Objectives This study aims to evaluate the association of hearing impairment with carotid intima-media thickness and subclinical atherosclerosis in rheumatoid arthritis (RA) patients. Patients and methods A total of 41 RA patients (2 males, 39 females; mean age 46.5±10.2 years; range 20 to 63 years) with no known traditional cardiovascular risk factors were included. Routine clinical and laboratory assessments for RA patients were performed. Pure-tone air (250-8000 Hz) and bone conduction (250-6000 Hz) thresholds were obtained, and tympanometry and impedance audiometry were conducted. Sensorineural hearing impairment was defined as an average threshold ≥25 decibels. Carotid intima-media thickness was assessed and classified using a cut-off point of 0.6 mm. Results Thirteen patients (31.7%) had normal hearing, while 28 (68.3%) had hearing impairment. Of these, 22 had bilateral sensorineural hearing impairment. Four patients had conductive hearing impairment (right-sided in three patients and left-sided in one patient). Patients with sensorineural hearing impairment had increased carotid intima-media thickness in the media segment of the common carotid artery compared to patients with normal hearing (right ear p=0.007; left ear p=0.075). Thickening of the carotid intima-media was associated with sensorineural hearing impairment in RA patients. Conclusion Rheumatoid arthritis patients without cardiovascular risk factors should be evaluated for carotid intima-media thickening as a possible contributing factor to hearing impairment. PMID:29900940

  7. Investigation of Psychophysiological and Subjective Effects of Long Working Hours - Do Age and Hearing Impairment Matter?

    PubMed

    Wagner-Hartl, Verena; Kallus, K Wolfgang

    2017-01-01

    Following current prognoses, demographic development is expected to lead to an aging working population. Keeping employees healthy and strengthening their ability to work therefore becomes more and more important. When employees become older, dealing with age-related impairments of sensory functions, such as hearing impairment, is a central issue. Recent evidence suggests that the negative effects associated with reduced hearing can have a strong impact at work. Especially in exhausting working situations, such as working overtime hours, age and hearing impairment might influence employees' well-being. Until now, neither the problem of older workers and long working hours nor the problem of hearing impairment and prolonged working time has been addressed explicitly. Therefore, a laboratory study was conducted to answer the research question: Do age and hearing impairment have an impact on the psychophysiological and subjective effects of long working hours? In total, 51 white-collar workers, aged between 24 and 63 years, participated in the laboratory study. The results show no significant effects of age or hearing impairment on the intensity of the subjective consequences (perceived recovery and fatigue, subjective emotional well-being, and physical symptoms) of long working hours. However, the psychophysiological response (the saliva cortisol level) to long working hours differs significantly between hearing-impaired and normal-hearing employees. Interestingly, the results suggest that, from a psychophysiological point of view, long working hours were more demanding for normal-hearing employees.

  8. Clinical Application and Psychometric Properties of a Norwegian Questionnaire for the Self-Assessment of Communication in Quiet and Adverse Conditions Using Two Revised APHAB Subscales.

    PubMed

    Heggdal, Peder O Laugen; Nordvik, Øyvind; Brännström, Jonas; Vassbotn, Flemming; Aarstad, Anne Kari; Aarstad, Hans Jørgen

    2018-01-01

    Difficulty in following and understanding conversation in different daily life situations is a common complaint among persons with hearing loss. To the best of our knowledge, there is currently no published, validated Norwegian questionnaire that allows for a self-assessment of unaided communication ability in a population with hearing loss. The aims of the present study were to investigate a questionnaire for the self-assessment of communication ability, examine its psychometric properties, and explore how demographic variables such as degree of hearing loss, age, and sex influence response patterns. A questionnaire based on the subscales of the Norwegian translation of the Abbreviated Profile of Hearing Aid Benefit was applied to a group of hearing aid users and normal-hearing controls: a total of 108 patients with bilateral hearing loss and 101 controls with self-reported normal hearing. The psychometric properties were evaluated, and associations and differences between outcome scores and descriptive variables were examined. A regression analysis was performed to investigate whether descriptive variables could predict outcome. The measures of reliability suggest that the questionnaire has satisfactory psychometric properties, and its outcome correlates with hearing loss severity, indicating good concurrent validity. The findings indicate that the proposed questionnaire is a valid measure of self-assessed communication ability in both quiet and adverse listening conditions in participants with and without hearing loss. American Academy of Audiology

  9. Role of Visual Speech in Phonological Processing by Children with Hearing Loss

    ERIC Educational Resources Information Center

    Jerger, Susan; Tye-Murray, Nancy; Abdi, Herve

    2009-01-01

    Purpose: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). Method: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in…

  10. Production of Sentence-Final Intonation Contours by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Allen, George D.; Arndorfer, Patricia M.

    2000-01-01

    This study compared the relationship between acoustic parameters and listeners' perceptions of intonation contours produced by 12 children (ages 7-14) with either severe-to-profound hearing impairment (HI) or normal hearing (NH). The HI children's productions were generally similar to those of the NH children in that they used fundamental frequency,…

  11. 76 FR 66734 - National Institute on Deafness and Other Communication Disorders Draft 2012-2016 Strategic Plan

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    ... areas of hearing and balance; smell and taste; and voice, speech, and language. The Strategic Plan... research training in the normal and disordered processes of hearing, balance, smell, taste, voice, speech... into three program areas: Hearing and balance; smell and taste; and voice, speech, and language. The...

  12. Vocabulary Facilitates Speech Perception in Children with Hearing Aids

    ERIC Educational Resources Information Center

    Klein, Kelsey E.; Walker, Elizabeth A.; Kirby, Benjamin; McCreery, Ryan W.

    2017-01-01

    Purpose: We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Method: Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5-12…

  13. Speech Recognition in Fluctuating and Continuous Maskers: Effects of Hearing Loss and Presentation Level.

    ERIC Educational Resources Information Center

    Summers, Van; Molis, Michelle R.

    2004-01-01

    Listeners with normal hearing sensitivity recognize speech more accurately in the presence of fluctuating background sounds, such as a single competing voice, than in unmodulated noise at the same overall level. These performance differences are greatly reduced in listeners with hearing impairment, who generally receive little benefit from…

  14. Hearing Loss Severity: Impaired Processing of Formant Transition Duration

    ERIC Educational Resources Information Center

    Coez, A.; Belin, P.; Bizaguet, E.; Ferrary, E.; Zilbovicius, M.; Samson, Y.

    2010-01-01

    Normal hearing listeners exploit the formant transition (FT) detection to identify place of articulation for stop consonants. Neuro-imaging studies revealed that short FT induced less cortical activation than long FT. To determine the ability of hearing impaired listeners to distinguish short and long formant transitions (FT) from vowels of the…

  15. Fathers' Involvement in Preschool Programs for Children with and without Hearing Loss

    ERIC Educational Resources Information Center

    Ingber, Sara; Most, Tova

    2012-01-01

    The authors compared the involvement in children's development and education of 38 fathers of preschoolers with hearing loss to the involvement of a matched group of 36 fathers of preschoolers with normal hearing, examining correlations between child, father, and family characteristics. Fathers completed self-reports regarding their parental…

  16. Binaural Interference and the Effects of Age and Hearing Loss.

    PubMed

    Mussoi, Bruna S S; Bentler, Ruth A

    2017-01-01

    The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise), and the effects of hearing loss have not been studied. The purpose of this cross-sectional study was to examine binaural interference through speech perception tests in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community and grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) in the group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage). On the CST, there was a significant binaural advantage across all groups in the group data analysis, as well as for 12% of the participants at each of the two signal-to-babble ratios (SBRs) tested; one participant had binaural interference at each SBR. Finally, on the DDT, a significant right-ear advantage was found in the group data and for at least some participants. Regarding age effects, more participants in the pooled elderly groups (33.3%) than in the younger group (16.7%) had binaural interference on the HINT. The presence of hearing loss yielded overall lower scores, but none of the comparisons between bilateral and unilateral performance were affected by hearing loss. Results of the within-subject analyses on the HINT agree with previous findings of binaural interference in ≥17% of listeners. Across all groups, a significant right-ear advantage was also seen on the DDT. The HINT results support the notion that the prevalence of binaural interference is likely higher in the elderly population. Hearing loss, however, did not affect the differences between bilateral and better unilateral scores. The possibility of binaural interference should be considered when fitting hearing aids to listeners with symmetric hearing loss. Comparing bilateral to unilateral (unaided) performance on tests such as the HINT may provide the clinician with objective data to support a subjective preference for one hearing aid as opposed to two. American Academy of Audiology

  17. [Research on activity evolution of cerebral cortex and hearing rehabilitation of congenitally deaf children after cochlear implant].

    PubMed

    Wang, X J; Liang, M J; Zhang, J P; Huang, H; Zheng, Y Q

    2017-11-05

    Objective: Hearing rehabilitation outcomes differ significantly among congenitally deaf children after cochlear implantation (CI). The intrinsic mechanism that affects hearing rehabilitation in these patients was examined from the perspective of evoked EEG source activity. Method: ERP data were collected from 23 patients and 10 control-group children at 0, 3, 6, 9, and 12 months after CI. According to their hearing rehabilitation during the 12 months after CI, the patients were divided into two groups: good and poor rehabilitation. sLORETA was then used to visualize changes in the patients' cerebral cortex, and the results were compared with the control group. Result: Cross-modal reorganization of the cerebral cortex exists in congenitally deaf children. This cross-modal reorganization gradually receded after CI, and the activity of the relevant cortex gradually returned to normal. After 12 months there was a statistically significant difference (P < 0.05) between the good and poor rehabilitation groups in the temporal lobe and the associated cortex around the parietal lobe. Conclusion: The normalization of cross-modal reorganization reflects hearing rehabilitation after CI; in particular, normalization of activity in the temporal lobe and the associated cortex around the parietal lobe influences the rehabilitation of auditory function to some extent. Detecting this mechanism has important implications for hearing recovery training and for evaluating hearing rehabilitation after CI. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  18. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear

    PubMed Central

    Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.

    2014-01-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients, ranging from within the normal range to ITDs larger than any present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation-related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490

  19. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear.

    PubMed

    Gifford, René H; Grantham, D Wesley; Sheffield, Sterling W; Davis, Timothy J; Dwyer, Robert; Dorman, Michael F

    2014-06-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from -90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100-900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients, ranging from within the normal range to ITDs larger than any present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation-related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. Copyright © 2014. Published by Elsevier B.V.

  20. Assessing the Importance of Lexical Tone Contour to Sentence Perception in Mandarin-Speaking Children with Normal Hearing

    ERIC Educational Resources Information Center

    Zhu, Shufeng; Wong, Lena L. N.; Wang, Bin; Chen, Fei

    2017-01-01

    Purpose: The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Method: Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal…

  1. Dichotic Listening and Otoacoustic Emissions: Shared Variance between Cochlear Function and Dichotic Listening Performance in Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Markevych, Vladlena; Asbjornsen, Arve E.; Lind, Ola; Plante, Elena; Cone, Barbara

    2011-01-01

    The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects with age range from 18 to 39, balanced for gender with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV-syllables in a…

  2. An Acoustic Analysis of the Vowel Space in Young and Old Cochlear-Implant Speakers

    ERIC Educational Resources Information Center

    Neumeyer, Veronika; Harrington, Jonathan; Draxler, Christoph

    2010-01-01

    The main purpose of this study was to compare acoustically the vowel spaces of two groups of cochlear implantees (CI) with two age-matched normal hearing groups. Five young test persons (15-25 years) and five older test persons (55-70 years) with CI and two control groups of the same age with normal hearing were recorded. The speech material…

  3. Perception of Speech Produced by Native and Nonnative Talkers by Listeners with Normal Hearing and Listeners with Cochlear Implants

    ERIC Educational Resources Information Center

    Ji, Caili; Galvin, John J.; Chang, Yi-ping; Xu, Anting; Fu, Qian-Jie

    2014-01-01

    Purpose: The aim of this study was to evaluate the understanding of English sentences produced by native (English) and nonnative (Spanish) talkers by listeners with normal hearing (NH) and listeners with cochlear implants (CIs). Method: Sentence recognition in noise was measured in adult subjects with CIs and subjects with NH, all of whom were…

  4. Central auditory processing effects induced by solvent exposure.

    PubMed

    Fuente, Adrian; McPherson, Bradley

    2007-01-01

    Various studies have demonstrated that organic solvent exposure may induce auditory damage. Studies conducted in workers occupationally exposed to solvents suggest, on the one hand, poorer hearing thresholds than in matched non-exposed workers, and on the other hand, central auditory damage due to solvent exposure. Taking into account the potential auditory damage induced by solvent exposure due to the neurotoxic properties of such substances, the present research aimed at studying the possible auditory processing disorder (APD), and possible hearing difficulties in daily life listening situations that solvent-exposed workers may acquire. Fifty workers exposed to a mixture of organic solvents (xylene, toluene, methyl ethyl ketone) and 50 non-exposed workers matched by age, gender and education were assessed. Only subjects with no history of ear infections, high blood pressure, kidney failure, metabolic and neurological diseases, or alcoholism were selected. The subjects had either normal hearing or sensorineural hearing loss, and normal tympanometric results. Hearing-in-noise (HINT), dichotic digit (DD), filtered speech (FS), pitch pattern sequence (PPS), and random gap detection (RGD) tests were carried out in the exposed and non-exposed groups. A self-report inventory of each subject's performance in daily life listening situations, the Amsterdam Inventory for Auditory Disability and Handicap, was also administered. Significant threshold differences between exposed and non-exposed workers were found at some of the hearing test frequencies, for both ears. However, exposed workers still presented normal hearing thresholds as a group (equal or better than 20 dB HL). Also, for the HINT, DD, PPS, FS and RGD tests, non-exposed workers obtained better results than exposed workers. Finally, solvent-exposed workers reported significantly more hearing complaints in daily life listening situations than non-exposed workers. It is concluded that subjects exposed to solvents may acquire an APD and thus the sole use of pure-tone audiometry is insufficient to assess hearing in solvent-exposed populations.

  5. Vibrotactile Presentation of Musical Notes to the Glabrous Skin for Adults with Normal Hearing or a Hearing Impairment: Thresholds, Dynamic Range and High-Frequency Perception

    PubMed Central

    Maté-Cid, Saúl; Fulford, Robert; Seiffert, Gary; Ginsborg, Jane

    2016-01-01

    Presentation of music as vibration to the skin has the potential to facilitate interaction between musicians with hearing impairments and other musicians during group performance. Vibrotactile thresholds have been determined to assess the potential for vibrotactile presentation of music to the glabrous skin of the fingertip, forefoot and heel. No significant differences were found between the thresholds for sinusoids representing notes between C1 and C6 when presented to the fingertip of participants with normal hearing and with a severe or profound hearing loss. For participants with normal hearing, thresholds for notes between C1 and C6 showed the characteristic U-shape curve for the fingertip, but not for the forefoot and heel. Compared to the fingertip, the forefoot had lower thresholds between C1 and C3, and the heel had lower thresholds between C1 and G2; this is attributed to spatial summation from the Pacinian receptors over the larger contactor area used for the forefoot and heel. Participants with normal hearing assessed the perception of high-frequency vibration using 1-s sinusoids presented to the fingertip and were found to be more aware of transient vibration at the beginning and/or end of notes between G4 and C6 when stimuli were presented 10 dB above threshold, rather than at threshold. An average of 94% of these participants reported feeling continuous vibration between G4 and G5 with stimuli presented 10 dB above threshold. Based on the experimental findings and consideration of health effects relating to vibration exposure, a suitable range of notes for vibrotactile presentation of music is identified as being from C1 to G5. This is more limited than for human hearing but the fundamental frequencies of the human voice, and the notes played by many instruments, lie within it. However, the dynamic range might require compression to avoid the negative effects of amplitude on pitch perception. PMID:27191400

  6. Evaluation of family history of permanent hearing loss in childhood as a risk indicator in universal screening.

    PubMed

    Valido Quintana, Mercedes; Oviedo Santos, Ángeles; Borkoski Barreiro, Silvia; Santana Rodríguez, Alfredo; Ramos Macías, Ángel

    Sixty percent of prelingual hearing loss is of genetic origin, and a family history of permanent childhood hearing loss is a risk factor. The objective of the study was to determine the relationship between this risk factor and hearing loss; clinical and epidemiological characteristics and related nonsyndromic genetic variation were evaluated. This was a retrospective, descriptive, observational study of newborns born between January 2007 and December 2010 with family history as a risk factor for hearing loss, screened using transient evoked otoacoustic emissions and auditory brainstem response. A total of 26,717 children were born; 857 (3.2%) had a family history, and 57 (0.21%) failed the second test. Of these, 29.1% (n=16) had another risk factor and 17.8% (n=9) had no classical risk factor. No risk factor other than heart disease was related to hearing loss. Overall, 76.4% had normal hearing and 23.6% had hearing loss. The mean number of family members with hearing loss was 1.25. On genetic testing, 82.86% were normal homozygotes, 11.43% were heterozygous for the Connexin 26 gene (35delG), 2.86% were heterozygous for R143W in the same gene, and 2.86% were mutant homozygotes (35delG). We found no relationship between hearing loss and the mutated alleles. The percentage of children with a family history and hearing loss is higher than expected in the general population. The genetic profile requires updating to clarify the relationship between hearing loss and heart disease, family history, and the low prevalence of the mutations analyzed. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  7. Factors associated with Hearing Loss in a Normal-Hearing Guinea Pig Model of Hybrid Cochlear Implants

    PubMed Central

    Tanaka, Chiemi; Nguyen-Huynh, Anh; Loera, Katherine; Stark, Gemaine; Reiss, Lina

    2014-01-01

    The Hybrid cochlear implant (CI), also known as Electro- Acoustic Stimulation (EAS), is a new type of CI that preserves residual acoustic hearing and enables combined cochlear implant and hearing aid use in the same ear. However, 30-55% of patients experience acoustic hearing loss within days to months after activation, suggesting that both surgical trauma and electrical stimulation may cause hearing loss. The goals of this study were to: 1) determine the contributions of both implantation surgery and EAS to hearing loss in a normal-hearing guinea pig model; 2) determine which cochlear structural changes are associated with hearing loss after surgery and EAS. Two groups of animals were implanted (n=6 per group), with one group receiving chronic acoustic and electric stimulation for 10 weeks, and the other group receiving no direct acoustic or electric stimulation during this time frame. A third group (n=6) was not implanted, but received chronic acoustic stimulation. Auditory brainstem response thresholds were followed over time at 1, 2, 6, and 16 kHz. At the end of the study, the following cochlear measures were quantified: hair cells, spiral ganglion neuron density, fibrous tissue density, and stria vascularis blood vessel density; the presence or absence of ossification around the electrode entry was also noted. After surgery, implanted animals experienced a range of 0-55 dB of threshold shifts in the vicinity of the electrode at 6 and 16 kHz. The degree of hearing loss was significantly correlated with reduced stria vascularis vessel density and with the presence of ossification, but not with hair cell counts, spiral ganglion neuron density, or fibrosis area. After 10 weeks of stimulation, 67% of implanted, stimulated animals had more than 10 dB of additional threshold shift at 1 kHz, compared to 17% of implanted, non-stimulated animals and 0% of non-implanted animals. This 1-kHz hearing loss was not associated with changes in any of the cochlear measures quantified in this study. The variation in hearing loss after surgery and electrical stimulation in this animal model is consistent with the variation in human patients. Further, these findings illustrate an advantage of a normal-hearing animal model for quantification of hearing loss and damage to cochlear structures without the confounding effects of chemical- or noise-induced hearing loss. Finally, this study is the first to suggest a role of the stria vascularis and damage to the lateral wall in implantation-induced hearing loss. Further work is needed to determine the mechanisms of implantation- and electrical-stimulation-induced hearing loss. PMID:25128626

  8. How to make negotiation work

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willms, J.R.

    The siting of MSW facilities often generates controversy. Environmental negotiation can be a useful method for defusing controversy while allowing community leaders to maintain control of a proposed project. The environmental hearing process is described. The paper then discusses advantages and disadvantages in the context of MSW management and improving the hearing process.

  9. Auditory phonological priming in children and adults during word repetition

    NASA Astrophysics Data System (ADS)

    Cleary, Miranda; Schwartz, Richard G.

    2004-05-01

    Short-term auditory phonological priming effects involve changes in the speed with which words are processed by a listener as a function of recent exposure to other similar-sounding words. Activation of phonological/lexical representations appears to persist beyond the immediate offset of a word, influencing subsequent processing. Priming effects are commonly cited as demonstrating concurrent activation of word/phonological candidates during word identification. Phonological priming is controversial, with the direction of effects (facilitation versus slowing) varying with the prime-target relationship. In adults, however, it has repeatedly been demonstrated that hearing a prime word that rhymes with the following target word (ISI=50 ms) decreases the time necessary to initiate repetition of the target, relative to when the prime and target have no phonemic overlap. Activation of phonological representations in children has not typically been studied using this paradigm, auditory-word + picture-naming tasks being used instead. The present study employed an auditory phonological priming paradigm being developed for use with normal-hearing and hearing-impaired children. Initial results from normal-hearing adults replicate previous reports of faster naming times for targets following a rhyming prime word than for targets following a prime having no phonemes in common. Results from normal-hearing children will also be reported. [Work supported by NIH-NIDCD T32DC000039.]

  10. Normal-hearing listener preferences of music as a function of signal-to-noise-ratio

    NASA Astrophysics Data System (ADS)

    Barrett, Jillian G.

    2005-04-01

    Optimal signal-to-noise ratios (SNR) for speech discrimination are well-known, well-documented phenomena. Discrimination preferences and functions have been studied for both normal-hearing and hard-of-hearing populations, and information from these studies has provided clearer indices on additional factors affecting speech discrimination ability and SNR preferences. This knowledge lends itself to improvements in hearing aids and amplification devices, telephones, television and radio transmissions, and a wide arena of recorded media such as movies and music. This investigation was designed to identify the preferred signal-to-background ratio (SBR) of normal-hearing listeners in a musical setting. The signal was the singer's voice, and music was considered the background. Subjects listened to an unfamiliar ballad with a female singer, and rated seven different SBR treatments. When listening to melodic motifs with linguistic content, results indicated subjects preferred SBRs similar to those in conventional speech discrimination applications. However, unlike traditional speech discrimination studies, subjects did not prefer increased levels of SBR. Additionally, subjects had a much larger acceptable range of SBR in melodic motifs where the singer's voice was not intended to communicate via linguistic means, but by the pseudo-paralinguistic means of vocal timbre and harmonic arrangements. Results indicate further studies investigating perception of singing are warranted.
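
    The signal-to-background ratio (SBR) studied here is the same quantity as a conventional SNR, typically computed in dB from the RMS levels of the vocal and accompaniment tracks. A small illustrative sketch, with placeholder signals rather than the study's stimuli:

    ```python
    # Sketch: signal-to-background ratio (SBR) in dB from RMS levels.
    # 'voice' and 'background' are placeholder arrays, not the study's stimuli.
    import numpy as np

    def rms(x):
        return np.sqrt(np.mean(np.square(x)))

    def sbr_db(voice, background):
        return 20.0 * np.log10(rms(voice) / rms(background))

    rng = np.random.default_rng(1)
    voice = 0.5 * rng.standard_normal(48000)        # 1 s at 48 kHz, placeholder
    background = 0.25 * rng.standard_normal(48000)
    print(f"SBR = {sbr_db(voice, background):.1f} dB")   # ~ +6 dB for this example
    ```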

  11. Auditory, visual, and auditory-visual perception of emotions by individuals with cochlear implants, hearing aids, and normal hearing.

    PubMed

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.

  12. High-frequency tone burst-evoked ABR latency-intensity functions.

    PubMed

    Fausti, S A; Olson, D J; Frey, R H; Henry, J A; Schaffer, H I

    1993-01-01

    High-frequency tone burst stimuli (8, 10, 12, and 14 kHz) have been developed and demonstrated to provide reliable and valid auditory brainstem responses (ABRs) in normal-hearing subjects. In this study, latency-intensity functions (LIFs) were determined using these stimuli in 14 normal-hearing individuals. Significant shifts in response latency occurred as a function of stimulus intensity for all tone burst frequencies. For each 10 dB shift in intensity, latency shifts for waves I and V were statistically significant except for one isolated instance. LIF slopes were comparable between frequencies, ranging from 0.020 to 0.030 msec/dB. These normal LIFs for high-frequency tone burst-evoked ABRs suggest the degree of response latency change that might be expected from, for example, progressive hearing loss due to ototoxic insult, although these phenomena may not be directly related.
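
    The latency-intensity function (LIF) slopes reported above (0.020-0.030 msec/dB) are simply the least-squares slope of wave latency against stimulus level. A brief sketch of that fit, using made-up latency values rather than the study's data:

    ```python
    # Sketch: slope of a latency-intensity function (LIF) by least squares.
    # The intensity/latency pairs below are illustrative only.
    import numpy as np

    intensity_db = np.array([80, 70, 60, 50, 40])        # stimulus level (dB)
    latency_ms   = np.array([5.6, 5.9, 6.1, 6.4, 6.7])   # wave V latency (ms)

    slope, intercept = np.polyfit(intensity_db, latency_ms, deg=1)
    print(f"LIF slope = {slope:.3f} ms/dB")   # negative: latency shortens as level rises
    ```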

  13. Cortical auditory evoked potentials in the assessment of auditory neuropathy: two case studies.

    PubMed

    Pearce, Wendy; Golding, Maryanne; Dillon, Harvey

    2007-05-01

    Infants with auditory neuropathy and possible hearing impairment are being identified at very young ages through the implementation of hearing screening programs. The diagnosis is commonly based on evidence of normal cochlear function but abnormal brainstem function. This lack of normal brainstem function is highly problematic when prescribing amplification in young infants because prescriptive formulae require the input of hearing thresholds that are normally estimated from auditory brainstem responses to tonal stimuli. Without this information, there is great uncertainty surrounding the final fitting. Cortical auditory evoked potentials may, however, still be evident and reliably recorded to speech stimuli presented at conversational levels. The case studies of two infants are presented that demonstrate how these higher order electrophysiological responses may be utilized in the audiological management of some infants with auditory neuropathy.

  14. A comprehensive noise survey of the S-70A-9 Black Hawk helicopter.

    PubMed

    King, R B; Saliba, A J; Brock, J R

    1999-02-01

    This paper reports the results of a comprehensive noise survey of the Sikorsky S-70A-9 Black Hawk helicopter environment and provides an assessment of the hearing protection devices worn by Australian Army personnel exposed to that environment. At-ear noise levels were measured at 4 positions in the cabin of the Black Hawk under various flight conditions and at 13 positions outside the Black Hawk under various ground running conditions using the Head Acoustic Measurement System (Head, GmbH). The attenuation properties of the hearing protection devices (HPDs) normally worn by aircrew and maintenance crews (the ALPHA helmet and the Roanwell MX-2507 Communications headset) were also assessed. At-ear sound pressure levels that would be experienced by personnel wearing their normal HPDs were determined at the positions they would normally occupy in and around the aircraft. Results indicate that HPDs do not provide adequate hearing protection to meet current hearing conservation regulations which allow a permissible noise exposure of 85 dB(A) for an 8-h day.

  15. Vocabulary Knowledge of Children With Cochlear Implants: A Meta-Analysis

    PubMed Central

    2016-01-01

    This article employs meta-analysis procedures to evaluate whether children with cochlear implants demonstrate lower spoken-language vocabulary knowledge than peers with normal hearing. Of the 754 articles screened and 52 articles coded, 12 articles met predetermined inclusion criteria (with an additional 5 included for one analysis). Effect sizes were calculated for relevant studies and forest plots were used to compare differences between groups of children with normal hearing and children with cochlear implants. Weighted effect size averages for expressive vocabulary measures (g = −11.99; p < .001) and for receptive vocabulary measures (g = −20.33; p < .001) indicated that children with cochlear implants demonstrate lower vocabulary knowledge than children with normal hearing. Additional analyses confirmed the value of comparing vocabulary knowledge of children with hearing loss to a tightly matched (e.g., socioeconomic status-matched) sample. Age of implantation, duration of implantation, and chronological age at testing were not significantly related to magnitude of weighted effect size. Findings from this analysis represent a first step toward resolving discrepancies in the vocabulary knowledge literature. PMID:26712811

  16. Intelligibility of Telephone Speech for the Hearing Impaired When Various Microphones Are Used for Acoustic Coupling.

    ERIC Educational Resources Information Center

    Janota, Claus P.; Janota, Jeanette Olach

    1991-01-01

    Various candidate microphones were evaluated for acoustic coupling of hearing aids to a telephone receiver. Results from testing by 9 hearing-impaired adults found comparable listening performance with a pressure gradient microphone at a 10 decibel higher level of interfering noise than with a normal pressure-sensitive microphone. (Author/PB)

  17. Parental Support for Language Development during Joint Book Reading for Young Children with Hearing Loss

    ERIC Educational Resources Information Center

    DesJardin, Jean L.; Doll, Emily R.; Stika, Carren J.; Eisenberg, Laurie S.; Johnson, Karen J.; Ganguly, Dianne Hammes; Colson, Bethany G.; Henning, Shirley C.

    2014-01-01

    Parent and child joint book reading (JBR) characteristics and parent facilitative language techniques (FLTs) were investigated in two groups of parents and their young children; children with normal hearing (NH; "n" = 60) and children with hearing loss (HL; "n" = 45). Parent-child dyads were videotaped during JBR interactions,…

  18. Anxious Mothers and At-Risk Infants: The Influence of Mild Hearing Impairment on Early Interaction.

    ERIC Educational Resources Information Center

    Day, Pat Spencer; Prezioso, Carlene

    To examine the influence of imperfect audition in otherwise intact infants on early mother-infant interaction, three hard of hearing and three normally hearing infants were videotaped in interaction with their mothers. Interaction was coded, a narrative record of the mothers' nonverbal behavior was made, and transcripts of interviews with the…

  19. Individual Sensitivity to Spectral and Temporal Cues in Listeners with Hearing Impairment

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose: The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method: Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal…

  20. Purpose of Newborn Hearing Screening

    MedlinePlus

    ... services, babies will have trouble with speech and language development. For some babies, early intervention services may include ... baby will have the best chance for normal language development if any hearing loss is discovered and treatment ...

  1. Emotional and behavioural difficulties in children and adolescents with hearing impairment: a systematic review and meta-analysis.

    PubMed

    Stevenson, Jim; Kreppner, Jana; Pimperton, Hannah; Worsfold, Sarah; Kennedy, Colin

    2015-05-01

    The aim of this study is to estimate the extent to which children and adolescents with hearing impairment (HI) show higher rates of emotional and behavioural difficulties compared to normally hearing children. Studies of emotional and behavioural difficulties in children and adolescents were traced from computerized systematic searches supplemented, where appropriate, by studies referenced in previous narrative reviews. Effect sizes (Hedges' g) were calculated for all studies. Meta-analyses were conducted on the weighted effect sizes obtained for studies adopting the Strength and Difficulties Questionnaire (SDQ) and on the unweighted effect sizes for non-SDQ studies. 33 non-SDQ studies were identified in which emotional and behavioural difficulties in children with HI could be compared to normally hearing children. The unweighted average g for these studies was 0.36. The meta-analysis of the 12 SDQ studies gave estimated effect sizes of 0.23 (95% CI 0.07, 0.40), 0.34 (95% CI 0.19, 0.49) and -0.01 (95% CI -0.32, 0.13) for Parent, Teacher and Self-ratings of Total Difficulties, respectively. The SDQ sub-scale showing consistent differences across raters between groups with HI and those with normal hearing was Peer Problems. Children and adolescents with HI have scores on emotional and behavioural difficulties measures about a quarter to a third of a standard deviation higher than hearing children. Children and adolescents with HI are in need of support to help their social relationships particularly with their peers.
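
    Hedges' g and the weighted averaging described in this review are standard meta-analytic steps. The sketch below shows both: computing g for a single study with the small-sample correction, then pooling studies by inverse-variance weighting. The study values are hypothetical, not taken from the review:

    ```python
    # Sketch: Hedges' g for one study and a fixed-effect (inverse-variance)
    # weighted average across studies. Example values are illustrative only.
    import math

    def hedges_g(mean_hi, mean_nh, sd_hi, sd_nh, n_hi, n_nh):
        """Standardized mean difference with the small-sample correction J."""
        sd_pooled = math.sqrt(((n_hi - 1) * sd_hi**2 + (n_nh - 1) * sd_nh**2)
                              / (n_hi + n_nh - 2))
        d = (mean_hi - mean_nh) / sd_pooled
        j = 1 - 3 / (4 * (n_hi + n_nh) - 9)   # Hedges' correction factor
        return j * d

    def g_variance(g, n1, n2):
        return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

    def weighted_mean_g(gs_and_ns):
        """Fixed-effect pooled g: weights are inverse variances."""
        weights = [1 / g_variance(g, n1, n2) for g, n1, n2 in gs_and_ns]
        return sum(w * g for w, (g, _, _) in zip(weights, gs_and_ns)) / sum(weights)

    # Three hypothetical studies: (g, n_HI, n_NH)
    studies = [(0.30, 60, 80), (0.20, 45, 50), (0.40, 120, 110)]
    print(f"pooled g = {weighted_mean_g(studies):.2f}")
    ```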

  2. Differences in interregional brain connectivity in children with unilateral hearing loss.

    PubMed

    Jung, Matthew E; Colletta, Miranda; Coalson, Rebecca; Schlaggar, Bradley L; Lieu, Judith E C

    2017-11-01

    To identify functional network architecture differences in the brains of children with unilateral hearing loss (UHL) using resting-state functional-connectivity magnetic resonance imaging (rs-fcMRI), a prospective observational study was conducted. Children (7 to 17 years of age) with severe to profound hearing loss in one ear, along with their normal-hearing (NH) siblings, were recruited and imaged using rs-fcMRI. Eleven children had right UHL; nine had left UHL; and 13 had normal hearing. Forty-one brain regions of interest culled from established brain networks such as the default mode (DMN), cingulo-opercular (CON), and frontoparietal (FPN) networks, as well as regions for language, phonological, and visual processing, were analyzed using regionwise correlations and conjunction analysis to determine differences in functional connectivity between the UHL and NH children. Compared to the NH group, children with UHL showed increased connectivity between multiple networks, such as between the CON and visual processing centers. However, they also showed decreased and aberrant connectivity patterns, including coactivation of the DMN and FPN, two networks whose activity is usually negatively correlated. Children with UHL demonstrate multiple functional connectivity differences between brain networks involved in executive function, cognition, and language comprehension that may represent adaptive as well as maladaptive changes. These findings suggest that interventions or habilitation beyond amplification might reduce some children's need for additional help at school. 3b. Laryngoscope, 127:2636-2645, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Emotional perception of music in children with unilateral cochlear implants.

    PubMed

    Shirvani, Sareh; Jafari, Zahra; Sheibanizadeh, Abdolreza; Motasaddi Zarandy, Masoud; Jalaie, Shohre

    2014-10-01

    Cochlear implantation (CI) improves language skills among children with hearing loss. However, children with CIs still fall short of fulfilling some other needs, including musical perception. This is often attributed to the biological, technological, and acoustic limitations of CIs. Emotions play a key role in the understanding and enjoyment of music. The present study aimed to investigate the emotional perception of music in children with bilaterally severe-to-profound hearing loss and unilateral CIs. Twenty-five children with congenital severe-to-profound hearing loss and unilateral CIs and 30 children with normal hearing participated in the study. The children's emotional perceptions of music, as defined by Peretz (1998), were measured. Children were instructed to indicate happy or sad feelings fostered in them by the music by pointing to pictures of faces showing these emotions. Children with CIs obtained significantly lower scores than children with normal hearing for both happy and sad items of music, as well as in overall test scores (P<0.001). Furthermore, in both the CI group (P=0.49) and the control group (P<0.001), happy items were recognized correctly more often than sad items. Hearing-impaired children with CIs had poorer emotional perception of music than their normal-hearing peers. Given the importance of music in the development of language, cognitive, and social interaction skills, aural rehabilitation programs for children with CIs should focus particularly on music. It is also essential to enhance the quality of musical perception by improving the quality of implant prostheses.

  4. Assessment of the maximum voluntary arm muscle contraction in sign language for the deaf.

    PubMed

    Regalo, S C H; Teixeira, V R; Vitti, M; Chaves, T C; Hallak, J E C; Bevilaqua-Grossi, D; Siriani de Oliveira, A

    2006-01-01

    The purpose of this study was to investigate the levels of upper limb muscle activation in deaf individuals who use the Brazilian sign language (LIBRAS), comparing these findings to volunteers with no postural deviations and normal hearing. Forty-eight volunteers were divided into two groups of healthy and deaf subjects (24 volunteers per group). The rest signals were obtained with the volunteer keeping the upper limb in the anatomical position but with the forearm flexed and supported by the lower limb. Maximum voluntary isometric contractions (MVIC) of the biceps, triceps, deltoid, and trapezius muscles were performed in the muscle function testing position. Statistical analysis was performed using SPSS 10.0; continuous data with a normal distribution were analyzed by ANOVA with a significance level of p < 0.01. The normalized electromyographic data obtained at rest showed no statistically significant differences among the studied muscles in either group. In the comparison of normalized RMS values obtained during MVIC, the mean values for the trapezius muscle in the deaf group were significantly lower than in the control group. These results indicate that there are no differences in the mean normalized RMS levels of muscle activation for the biceps, triceps, and anterior portion of the deltoid between deaf and healthy individuals.
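
    The group comparison rests on RMS amplitudes normalized to each muscle's maximum voluntary isometric contraction (MVIC). A short sketch of that normalization step, with synthetic signals standing in for the EMG recordings:

    ```python
    # Sketch: RMS of an EMG epoch expressed as a percentage of the RMS
    # recorded during maximum voluntary isometric contraction (MVIC).
    # The signals below are synthetic placeholders, not study data.
    import numpy as np

    def rms(signal):
        return np.sqrt(np.mean(np.square(signal)))

    rng = np.random.default_rng(0)
    emg_task = 0.2 * rng.standard_normal(2000)   # e.g. signing / rest epoch
    emg_mvic = 1.0 * rng.standard_normal(2000)   # MVIC reference epoch

    normalized = 100.0 * rms(emg_task) / rms(emg_mvic)
    print(f"normalized RMS = {normalized:.1f} %MVIC")
    ```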

  5. Gap Detection in School-Age Children and Adults: Effects of Inherent Envelope Modulation and the Availability of Cues across Frequency

    ERIC Educational Resources Information Center

    Buss, Emily; Hall, Joseph W., III; Porter, Heather; Grose, John H.

    2014-01-01

    Purpose: The present study evaluated the effects of inherent envelope modulation and the availability of cues across frequency on behavioral gap detection with noise-band stimuli in school-age children. Method: Listeners were 34 normal-hearing children (ages 5.2-15.6 years) and 12 normal-hearing adults (ages 18.5-28.8 years). Stimuli were…

  6. An initial study of voice characteristics of children using two different sound coding strategies in comparison to normal hearing children.

    PubMed

    Coelho, Ana Cristina; Brasolotto, Alcione Ghedini; Bevilacqua, Maria Cecília

    2015-06-01

    To compare some perceptual and acoustic characteristics of the voices of children who use the advanced combination encoder (ACE) or fine structure processing (FSP) speech coding strategies, and to investigate whether these characteristics differ from children with normal hearing. Acoustic analysis of the sustained vowel /a/ was performed using the multi-dimensional voice program (MDVP). Analyses of sequential and spontaneous speech were performed using the real time pitch. Perceptual analyses of these samples were performed using visual-analogic scales of pre-selected parameters. Seventy-six children from three years to five years and 11 months of age participated. Twenty-eight were users of ACE, 23 were users of FSP, and 25 were children with normal hearing. Although both groups with CI presented with some deviated vocal features, the users of ACE presented with voice quality more like children with normal hearing than the users of FSP. Sound processing of ACE appeared to provide better conditions for auditory monitoring of the voice, and consequently, for better control of the voice production. However, these findings need to be further investigated due to the lack of comparative studies published to understand exactly which attributes of sound processing are responsible for differences in performance.

  7. Comparison of auditory comprehension skills in children with cochlear implant and typically developing children.

    PubMed

    Mandal, Joyanta Chandra; Kumar, Suman; Roy, Sumit

    2016-12-01

    The main goal of this study was to assess the auditory comprehension skills of native Hindi-speaking children with cochlear implants and typically developing children aged 3-7 years, and to compare the scores of the two groups. A total of sixty Hindi-speaking participants were selected and divided into two groups: Group A consisted of thirty children with normal hearing and Group B of thirty children using cochlear implants. The Test of Auditory Comprehension in Hindi (TACH) was used to assess auditory comprehension; each participant was required to point to the one of three pictures that best corresponded to the stimulus presented. Correct answers were scored as 1 and incorrect answers as 0. The TACH was administered to both groups. An independent t-test showed that the auditory comprehension scores of children using cochlear implants were significantly poorer than those of children with normal hearing on all three subtests. Pearson's correlation coefficient revealed a poor correlation between the scores of children with normal hearing and children using cochlear implants. The results of this study suggest that children using cochlear implants have poorer auditory comprehension skills than children with normal hearing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Enhancing the Induction Skill of Deaf and Hard-of-Hearing Children with Virtual Reality Technology.

    PubMed

    Passig, D; Eden, S

    2000-01-01

    Many researchers have found that deaf and hard-of-hearing children have unusual difficulty with reasoning and reaching a reasoned conclusion, particularly when the process of induction is required. The purpose of this study was to investigate whether practice in rotating virtual reality (VR) three-dimensional (3D) objects would have a positive effect on the ability of deaf and hard-of-hearing children to use inductive processes when dealing with shapes. Three groups were involved in the study: (1) an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game; (2) control group I, comprising 23 deaf and hard-of-hearing children, who played a similar two-dimensional (2D, non-VR) game; and (3) control group II, comprising 16 hearing children for whom no intervention was introduced. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the inductive thinking applied to shapes by the experimental group, whereas the first control group did not significantly improve their performance. Also, prior to the VR 3D experience, the deaf and hard-of-hearing children attained lower scores in inductive abilities than the children with normal hearing (control group II). After the VR 3D experience, the experimental group improved to the extent that there was no noticeable difference between them and the children with normal hearing.

  9. Reading skills in Persian deaf children with cochlear implants and hearing aids.

    PubMed

    Rezaei, Mohammad; Rashedi, Vahid; Morasae, Esmaeil Khedmati

    2016-10-01

    Reading skills are necessary for educational development in children, and many studies have shown that children with hearing loss often experience delays in reading. This study aimed to examine the reading skills of Persian deaf children with cochlear implants or hearing aids and to compare them with normal-hearing counterparts. The sample consisted of 72 second- and third-grade Persian-speaking children aged 8-12 years, divided into three equal groups: 24 children with cochlear implants (CI), 24 children with hearing aids (HA), and 24 children with normal hearing (NH). Reading performance was evaluated with the "Nama" reading test, which provides normative data for hearing and deaf children and consists of 10 subtests; the sum of the scores is regarded as the reading performance score. Results of an ANOVA on the reading test showed that NH children had significantly better reading performance than deaf children with CI and HA in both grades (P < 0.001). Post-hoc analysis using the Tukey test indicated no significant difference between the HA and CI groups in non-word reading, word reading, and word comprehension skills (P = 0.976, P = 0.988, and P = 0.998, respectively). Considering the findings, cochlear implantation is not significantly more effective than hearing aids for improving reading abilities. It is clear that even with considerable advances in hearing aid technology, many deaf children continue to find literacy a challenging struggle. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Discrimination Task Reveals Differences in Neural Bases of Tinnitus and Hearing Impairment

    PubMed Central

    Husain, Fatima T.; Pajor, Nathan M.; Smith, Jason F.; Kim, H. Jeff; Rudy, Susan; Zalewski, Christopher; Brewer, Carmen; Horwitz, Barry

    2011-01-01

    We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone. PMID:22066003

  11. Maternal Interactions with a Hearing and Hearing-Impaired Twin: Similarities and Differences in Speech Input, Interaction Quality, and Word Production

    ERIC Educational Resources Information Center

    Lam, Christa; Kitamura, Christine

    2010-01-01

    Purpose: This study examined a mother's speech style and interactive behaviors with her twin sons: 1 with bilateral hearing impairment (HI) and the other with normal hearing (NH). Method: The mother was video-recorded interacting with her twin sons when the boys were 12.5 and 22 months of age. Mean F0, F0 range, duration, and F1/F2 vowel space of…

  12. Type 2 diabetes and hearing loss in personnel of the Self-Defense Forces.

    PubMed

    Sakuta, Hidenari; Suzuki, Takashi; Yasuda, Hiroko; Ito, Teizo

    2007-02-01

    The association of type 2 diabetes with hearing loss was evaluated in middle-aged male personnel of the Self-Defense Forces (SDFs). Hearing loss was defined as a pure-tone average (PTA) of the thresholds at 0.5, 1, 2, and 4 kHz greater than 25 dB hearing level (HL) in the worse ear. Diabetes status was determined by self-report of physician-diagnosed diabetes or by oral glucose tolerance test (OGTT). Of 699 subjects studied (age 52.9+/-1.0 years), 103 subjects were classified as having type 2 diabetes. Fasting plasma glucose of diabetic subjects was 120+/-19 mg/dl. Hearing loss levels were higher (worse) among diabetic subjects compared with subjects with normal glucose tolerance (NGT) (30.7+/-13.0 dB versus 27.4+/-12.3 dB, P=0.014). Hearing loss was more prevalent among diabetic subjects than among subjects with normal glucose tolerance (60.2% versus 45.2%, P=0.006). The odds ratio (OR) of type 2 diabetes for the presence of hearing loss was 1.87 (95% confidence interval 1.20-2.91, P=0.006) in a logistic regression analysis adjusted for age, rank, cigarette smoking and ethanol consumption. These results suggest that type 2 diabetes is associated with hearing loss independently of lifestyle factors in middle-aged men.
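
    The hearing loss criterion above (worse-ear pure-tone average of the 0.5, 1, 2, and 4 kHz thresholds exceeding 25 dB HL) is straightforward to compute. A small sketch with invented thresholds:

```python
# Minimal sketch: worse-ear pure-tone average (PTA) and the > 25 dB HL cutoff.
def pta(thresholds_db):
    """Mean of the thresholds at 0.5, 1, 2 and 4 kHz (dB HL)."""
    assert len(thresholds_db) == 4
    return sum(thresholds_db) / 4.0

right_ear = [20, 25, 30, 45]  # hypothetical thresholds at 0.5, 1, 2, 4 kHz
left_ear = [15, 20, 25, 35]

worse_ear_pta = max(pta(right_ear), pta(left_ear))
has_hearing_loss = worse_ear_pta > 25.0
print(f"worse-ear PTA = {worse_ear_pta:.1f} dB HL, hearing loss: {has_hearing_loss}")
```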

  13. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    PubMed

    Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2018-06-01

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children. The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability by examining how this relationship varies across children with normal hearing and hearing loss, and by examining whether this relationship is mediated by domain-general skills (working memory, short-term memory, and inhibition). © 2017 The British Psychological Society.
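
    The mediation logic described above (working memory or short-term memory carrying part of the ANS-to-math relationship) can be illustrated with a simple regression-based check: compare the total effect of ANS acuity on math ability with its direct effect once the mediator is added. The sketch below uses simulated variables, not the study's data, and is not necessarily the authors' exact mediation procedure.

```python
# Minimal sketch (simulated data): total vs. direct effect of ANS acuity on
# math ability, with working memory (WM) entered as a mediator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 150
ans = rng.normal(size=n)                        # ANS acuity (standardized)
wm = 0.6 * ans + rng.normal(scale=0.8, size=n)  # working memory
math = 0.5 * wm + 0.1 * ans + rng.normal(scale=0.8, size=n)

total = sm.OLS(math, sm.add_constant(ans)).fit()
direct = sm.OLS(math, sm.add_constant(np.column_stack([ans, wm]))).fit()

print("total effect of ANS on math:", round(total.params[1], 3))
print("direct effect controlling for WM:", round(direct.params[1], 3))
```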

  14. The Socioeconomic Impact of Hearing Loss in US Adults

    PubMed Central

    Emmett, Susan D.; Francis, Howard W.

    2014-01-01

    Objective To evaluate the associations between hearing loss and educational attainment, income, and unemployment/underemployment in US adults. Study design National cross-sectional survey. Setting Ambulatory examination centers. Patients Adults aged 20-69 years who participated in the 1999-2002 cycles of the National Health and Nutrition Examination Survey (NHANES) audiometric evaluation and income questionnaire (n = 3379). Intervention(s) Pure tone audiometry, with hearing loss defined by World Health Organization criteria of bilateral pure tone average >25 decibels (0.5,1,2,4 kHz). Main outcome measure(s) Low educational attainment, defined as not completing high school; low income, defined as family income less than $20,000/year, and unemployment or underemployment, defined as not having a job or working less than 35 hours per week. Results Individuals with hearing loss had 3.21 times higher odds of low educational attainment (95% CI: 2.20-4.68) compared to normal-hearing individuals. Controlling for education, age, sex, and race, individuals with hearing loss had 1.58 times higher odds of low income (95% CI: 1.16-2.15) and 1.98 times higher odds of being unemployed or underemployed (95% CI: 1.38-2.85) compared to normal-hearing individuals. Conclusions Hearing loss is associated with low educational attainment in US adults. Even after controlling for education and important demographic factors, hearing loss is independently associated with economic hardship, including both low income and unemployment/underemployment. The societal impact of hearing loss is profound in this nationally representative study and should be further evaluated with longitudinal cohorts. PMID:25158616
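
    The adjusted odds ratios reported above come from a logistic regression with covariates; exponentiating a fitted coefficient gives the odds ratio, and exponentiating its confidence limits gives the interval. A minimal sketch with a simulated data frame (the variable names are assumptions, not the NHANES coding):

```python
# Minimal sketch (simulated data): adjusted odds ratio for low income given
# hearing loss, controlling for age and sex, via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "low_income": rng.integers(0, 2, n),
    "hearing_loss": rng.integers(0, 2, n),
    "age": rng.integers(20, 70, n),
    "female": rng.integers(0, 2, n),
})

model = smf.logit("low_income ~ hearing_loss + age + female", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)   # exponentiated coefficients
conf_int = np.exp(model.conf_int())  # 95% confidence intervals
print(odds_ratios["hearing_loss"], conf_int.loc["hearing_loss"].values)
```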

  15. Consequences of Early Conductive Hearing Loss on Long-Term Binaural Processing.

    PubMed

    Graydon, Kelley; Rance, Gary; Dowell, Richard; Van Dun, Bram

    The aim of the study was to investigate the long-term effects of early conductive hearing loss on binaural processing in school-age children. One hundred and eighteen children participated in the study: 82 children with a documented history of conductive hearing loss associated with otitis media and 36 controls who had documented histories showing no evidence of otitis media or conductive hearing loss. All children were demonstrated to have normal hearing acuity and middle ear function at the time of assessment. The Listening in Spatialized Noise Sentence (LiSN-S) task and the masking level difference (MLD) task were used as the two different measures of binaural interaction ability. Children with a history of conductive hearing loss performed significantly more poorly than controls on all LiSN-S conditions relying on binaural cues (DV90, p < 0.001 and SV90, p = 0.003). No significant difference was found between the groups in listening conditions without binaural cues. Fifteen children with a conductive hearing loss history (18%) showed results consistent with a spatial processing disorder. No significant difference was observed between the conductive hearing loss group and the controls on the MLD task. Furthermore, no correlations were found between LiSN-S and MLD. Results show a relationship between early conductive hearing loss and listening deficits that persist once hearing has returned to normal. Results also suggest that the two binaural interaction tasks (LiSN-S and MLD) may be measuring binaural processing at different levels. Findings highlight the need for a screening measure of functional listening ability in children with a history of early otitis media.

  16. Temporal masking functions for listeners with real and simulated hearing loss

    PubMed Central

    Desloge, Joseph G.; Reed, Charlotte M.; Braida, Louis D.; Perez, Zachary D.; Delhorne, Lorraine A.

    2011-01-01

    A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal masking functions for eight listeners with mild to severe sensorineural hearing loss. Each audiometric loss was simulated in a group of age-matched normal-hearing listeners through a combination of spectrally-shaped masking noise and multi-band expansion. Temporal-masking functions were obtained in both groups of listeners using a forward-masking paradigm in which the level of a 110-ms masker required to just mask a 10-ms fixed-level probe (5-10 dB SL) was measured as a function of the time delay between the masker offset and probe onset. At each of four probe frequencies (500, 1000, 2000, and 4000 Hz), temporal-masking functions were obtained using maskers that were 0.55, 1.0, and 1.15 times the probe frequency. The slopes and y-intercepts of the masking functions were not significantly different for listeners with real and simulated hearing loss. The y-intercepts were positively correlated with level of hearing loss while the slopes were negatively correlated. The ratio of the slopes obtained with the low-frequency maskers relative to the on-frequency maskers was similar for both groups of listeners and indicated a smaller compressive effect than that observed in normal-hearing listeners. PMID:21877806
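
    The slopes and y-intercepts discussed above describe straight-line fits of masker level at threshold against masker-probe delay. A minimal sketch of such a fit, with invented example values:

```python
# Minimal sketch (invented values): slope and intercept of a temporal masking
# function (masker level needed to just mask the probe vs. masker-probe delay).
import numpy as np

delays_ms = np.array([5, 10, 20, 40, 80])         # masker offset to probe onset
masker_level_db = np.array([72, 68, 61, 52, 40])  # threshold masker level

slope, intercept = np.polyfit(delays_ms, masker_level_db, deg=1)
print(f"slope = {slope:.2f} dB/ms, y-intercept = {intercept:.1f} dB")
```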

  17. Excitatory, inhibitory and facilitatory frequency response areas in the inferior colliculus of hearing impaired mice.

    PubMed

    Felix, Richard A; Portfors, Christine V

    2007-06-01

    Individuals with age-related hearing loss often have difficulty understanding complex sounds such as basic speech. The C57BL/6 mouse suffers from progressive sensorineural hearing loss and thus is an effective tool for dissecting the neural mechanisms underlying changes in complex sound processing observed in humans. Neural mechanisms important for processing complex sounds include multiple tuning and combination sensitivity, and these responses are common in the inferior colliculus (IC) of normal hearing mice. We examined neural responses in the IC of C57BL/6 mice to single tones and combinations of tones to examine the extent of spectral integration in the IC after age-related high frequency hearing loss. Ten percent of the neurons were tuned to multiple frequency bands and an additional 10% displayed non-linear facilitation to the combination of two different tones (combination sensitivity). No combination-sensitive inhibition was observed. By comparing these findings to spectral integration properties in the IC of normal hearing CBA/CaJ mice, we suggest that high frequency hearing loss affects some of the neural mechanisms in the IC that underlie the processing of complex sounds. The loss of spectral integration properties in the IC during aging likely impairs the central auditory system's ability to process complex sounds such as speech.

  18. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    PubMed

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of the study was to inform rehabilitation of hearing impairment by investigating changes in the neural activity of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed individual-specific virtual brains from participants' functional magnetic resonance data using effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. We found that patients with long-term bilateral hearing loss presented weaker activations in the auditory and language networks but enhanced neural activity in the default mode network compared with normally hearing subjects; the right cerebral hemisphere showed more changes than the left. Additionally, weaker neural activity in the primary auditory cortices was strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interacting circuits among the activated brain regions, and these interregional causal interactions implied that abnormal neural activity of the directional brain networks in the deaf patients impacted cognitive function.

  19. Improving speech-in-noise recognition for children with hearing loss: Potential effects of language abilities, binaural summation, and head shadow

    PubMed Central

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Tarr, Eric; Lowenstein, Joanna H.; Rice, Caitlin; Moberly, Aaron C.

    2014-01-01

    Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms. PMID:23834373

  20. How do albino fish hear?

    PubMed Central

    Lechner, W; Ladich, F

    2011-01-01

    Pigmentation disorders such as albinism are occasionally associated with hearing impairments in mammals. Therefore, we wanted to investigate whether such a phenomenon also exists in non-mammalian vertebrates. We measured the hearing abilities of normally pigmented and albinotic specimens of two catfish species, the European wels Silurus glanis (Siluridae) and the South American bronze catfish Corydoras aeneus (Callichthyidae). The non-invasive auditory evoked potential (AEP) recording technique was utilized to determine hearing thresholds at 10 frequencies from 0.05 to 5 kHz. Neither auditory sensitivity nor shape of AEP waveforms differed between normally pigmented and albinotic specimens at any frequency tested in both species. Silurus glanis and C. aeneus showed the best hearing between 0.3 and 1 kHz; the lowest thresholds were 78.4 dB at 0.5 kHz in S. glanis (pigmented), 75 dB at 1 kHz in S. glanis (albinotic), 77.6 dB at 0.5 kHz in C. aeneus (pigmented) and 76.9 dB at 1 kHz in C. aeneus (albinotic). This study indicates no association between albinism and hearing ability. Perhaps because of the lack of melanin in the fish inner ear, hearing in fishes is less likely to be affected by albinism than in mammals. PMID:21552308

  1. How do albino fish hear?

    PubMed

    Lechner, W; Ladich, F

    2011-03-01

    Pigmentation disorders such as albinism are occasionally associated with hearing impairments in mammals. Therefore, we wanted to investigate whether such a phenomenon also exists in non-mammalian vertebrates. We measured the hearing abilities of normally pigmented and albinotic specimens of two catfish species, the European wels Silurus glanis (Siluridae) and the South American bronze catfish Corydoras aeneus (Callichthyidae). The non-invasive auditory evoked potential (AEP) recording technique was utilized to determine hearing thresholds at 10 frequencies from 0.05 to 5 kHz. Neither auditory sensitivity nor shape of AEP waveforms differed between normally pigmented and albinotic specimens at any frequency tested in both species. Silurus glanis and C. aeneus showed the best hearing between 0.3 and 1 kHz; the lowest thresholds were 78.4 dB at 0.5 kHz in S. glanis (pigmented), 75 dB at 1 kHz in S. glanis (albinotic), 77.6 dB at 0.5 kHz in C. aeneus (pigmented) and 76.9 dB at 1 kHz in C. aeneus (albinotic). This study indicates no association between albinism and hearing ability. Perhaps because of the lack of melanin in the fish inner ear, hearing in fishes is less likely to be affected by albinism than in mammals.

  2. Comparison of speech recognition with adaptive digital and FM remote microphone hearing assistance technology by listeners who use hearing aids.

    PubMed

    Thibodeau, Linda

    2014-06-01

    The purpose of this study was to compare the benefits of 3 types of remote microphone hearing assistance technology (HAT), adaptive digital broadband, adaptive frequency modulation (FM), and fixed FM, through objective and subjective measures of speech recognition in clinical and real-world settings. Participants included 11 adults, ages 16 to 78 years, with primarily moderate-to-severe bilateral hearing impairment (HI), who wore binaural behind-the-ear hearing aids; and 15 adults, ages 18 to 30 years, with normal hearing. Sentence recognition in quiet and in noise and subjective ratings were obtained in 3 conditions of wireless signal processing. Performance by the listeners with HI when using the adaptive digital technology was significantly better than that obtained with the FM technology, with the greatest benefits at the highest noise levels. The majority of listeners also preferred the digital technology when listening in a real-world noisy environment. The wireless technology allowed persons with HI to surpass persons with normal hearing in speech recognition in noise, with the greatest benefit occurring with adaptive digital technology. The use of adaptive digital technology combined with speechreading cues would allow persons with HI to engage in communication in environments that would have otherwise not been possible with traditional wireless technology.

  3. The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening

    PubMed Central

    Piquado, Tepring; Benichov, Jonathan I.; Brownell, Hiram; Wingfield, Arthur

    2013-01-01

    Objective The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Design Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech-rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Study sample Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. Results When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Conclusion Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall. PMID:22731919

  4. Interactions between amplitude modulation and frequency modulation processing: Effects of age and hearing loss.

    PubMed

    Paraouty, Nihaad; Ewert, Stephan D; Wallaert, Nicolas; Lorenzi, Christian

    2016-07-01

    Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured for a 500-Hz carrier frequency and a 5-Hz modulation rate. For AM detection, FM at the same rate as the AM was superimposed with varying FM depth. For FM detection, AM at the same rate was superimposed with varying AM depth. The target stimuli always contained both amplitude and frequency modulations, while the standard stimuli only contained the interfering modulation. Young and older normal-hearing listeners, as well as older listeners with mild-to-moderate sensorineural hearing loss were tested. For all groups, AM and FM detection thresholds were degraded in the presence of the interfering modulation. AM detection with and without interfering FM was hardly affected by either age or hearing loss. While aging had an overall detrimental effect on FM detection with and without interfering AM, there was a trend that hearing loss further impaired FM detection in the presence of AM. Several models using optimal combination of temporal-envelope cues at the outputs of off-frequency filters were tested. The interfering effects could only be predicted for hearing-impaired listeners. This indirectly supports the idea that, in addition to envelope cues resulting from FM-to-AM conversion, normal-hearing listeners use temporal fine-structure cues for FM detection.

  5. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    PubMed

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD). The subjects were three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds, environmental sounds, and sound lateralization, and strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of the three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure tone audiometry and/or ABR, this should not immediately be diagnosed as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  6. Are normally sighted, visually impaired, and blind pedestrians accurate and reliable at making street crossing decisions?

    PubMed

    Hassan, Shirin E

    2012-05-04

    The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times.

  7. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    PubMed Central

    Hassan, Shirin E.

    2012-01-01

    Purpose. The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Methods. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. Results. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Conclusions. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times. PMID:22427593

  8. Computed tomography demonstrates abnormalities of contralateral ear in subjects with unilateral sensorineural hearing loss.

    PubMed

    Marcus, Sonya; Whitlow, Christopher T; Koonce, James; Zapadka, Michael E; Chen, Michael Y; Williams, Daniel W; Lewis, Meagan; Evans, Adele K

    2014-02-01

    Prior studies have associated gross inner ear abnormalities with pediatric sensorineural hearing loss (SNHL) using computed tomography (CT). No studies to date have specifically investigated morphologic inner ear abnormalities involving the contralateral unaffected ear in patients with unilateral SNHL. The purpose of this study is to evaluate contralateral inner ear structures of subjects with unilateral SNHL but no grossly abnormal findings on CT. IRB-approved retrospective analysis of pediatric temporal bone CT scans. Ninety-seven temporal bone CT scans, previously interpreted as "normal" by board-certified neuroradiologists according to accepted guidelines, were assessed using 12 measurements of the semicircular canals, cochlea and vestibule. The control group consisted of 72 "normal" temporal bone CTs from subjects in whom underlying SNHL had been excluded. The study group consisted of 25 normal-hearing contralateral temporal bones from subjects with unilateral SNHL. Multivariate analysis of covariance (MANCOVA) was then conducted to evaluate differences between the study and control groups. Cochlear basal turn lumen width was significantly greater in magnitude, and the central lucency of the lateral semicircular canal bony island was significantly lower in density, for audiometrically normal ears of subjects with unilateral SNHL compared to controls. Abnormalities of the inner ear were present in the contralateral audiometrically normal ears of subjects with unilateral SNHL. These data suggest that patients with unilateral SNHL may have a more pervasive disease process that results in abnormalities of both ears. The findings of a cochlear basal turn lumen width disparity >5% from "normal" and/or a lateral semicircular canal bony island central lucency disparity of >5% from "normal" may indicate inherent risk to the contralateral unaffected ear in pediatric patients with unilateral sensorineural hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Unilateral Hearing Loss Is Associated With Impaired Balance in Children: A Pilot Study.

    PubMed

    Wolter, Nikolaus E; Cushing, Sharon L; Vilchez-Madrigal, Luis D; James, Adrian L; Campos, Jennifer; Papsin, Blake C; Gordon, Karen A

    2016-12-01

    To determine if children with unilateral sensorineural hearing loss (UHL) demonstrate impaired balance compared with their normal hearing (NH) peers. Prospective, case-control study. Balance was assessed in 14 UHL and 14 NH children using the Bruininks-Oseretsky Test-2 (BOT-2) and time to fall (TTF) in an immersive, virtual-reality laboratory. Postural control was quantified by center of pressure (COP) using force plates. The effect of vision on balance was assessed by comparing scores and COP characteristics on BOT-2 tasks performed with eyes open and closed. Balance ability as measured by the BOT-2 score was significantly worse in children with UHL compared with NH children (p = 0.004). TTF was shorter in children with UHL compared with NH children in the most difficult tasks when visual and somatosensory inputs were limited (p < 0.01). Visual input improved postural control (reduced COP variability) in both groups in all tasks (p < 0.05) but postural control as measured by COP variability was more affected in children with UHL when visual input was removed while performing moderately difficult tasks (i.e., standing on one foot) (p = 0.02). In this pilot study, children with UHL show poorer balance skills than NH children. Significant differences in TTF between the two groups were only seen in the most difficult tasks and therefore may be missed on routine clinical assessment. Children with UHL appear to rely more on vision for maintaining postural control than their NH peers. These findings may point to deficits not only in the hearing but also the vestibular portion of the inner ear.

  10. 16 CFR 1605.8 - Rights of witnesses at investigational hearings and of deponents at depositions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of a sole proprietorship, partnership, or corporation who is required to produce documentary evidence... produce documentary evidence or give testimony at an investigational hearing or deposition cannot act as... standards of orderly and ethical conduct are maintained. Such designee shall, for reasons stated on the...

  11. Maintaining High Expectations

    ERIC Educational Resources Information Center

    Williams, Roger; Williams, Sherry

    2014-01-01

    Author and husband, Roger Williams, is hearing and signs fluently, and author and wife, Sherry Williams, is deaf and uses both speech and signs, although she is most comfortable signing. As parents of six children--deaf and hearing--they are determined to encourage their children to do their best, and they always set their expectations high. They…

  12. Hearings on Reform of the U.S. Workforce Preparation System. Hearings before the Subcommittee on Postsecondary Education, Training, and Life-Long Learning of the Committee on Economic and Educational Opportunities, House of Representatives, One Hundred Fourth Congress, First Session (February 6-7, 1995).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Economic and Educational Opportunities.

    This publication presents two hearings on how to establish and maintain a streamlined, top quality, and efficient system of work force preparation in the United States and the role of the federal government in developing such a system. Testimony consists of statements and prepared statements, letters, and supplemental materials from individuals…

  13. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
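
    The audiovisual benefit defined above is simply the unimodal auditory SRT minus the audiovisual SRT for each participant; positive values mean the subtitles helped. A minimal sketch with invented SRTs:

```python
# Minimal sketch (invented values): per-participant audiovisual benefit from
# automatically generated subtitles, in dB SNR.
import numpy as np

auditory_srt = np.array([-2.0, 1.5, 0.0, 3.2])       # speech alone
audiovisual_srt = np.array([-4.5, -1.0, -2.1, 1.0])  # speech plus subtitles

benefit = auditory_srt - audiovisual_srt
print(benefit, f"mean benefit = {benefit.mean():.2f} dB")
```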

  14. Longitudinal Development of Phonology and Morphology in Children with Late-Identified Mild-Moderate Sensorineural Hearing Loss

    PubMed Central

    Moeller, Mary Pat; McCleary, Elizabeth; Putman, Coille; Tyler-Krings, Amy; Hoover, Brenda; Stelmachowicz, Patricia

    2010-01-01

    Objective Studies of language development in children with mild-moderate hearing loss are relatively rare. Longitudinal studies of children with late-identified hearing loss have not been conducted, and they are relevant for determining how a period of unaided mild-moderate hearing loss impacts development. In recent years, newborn hearing screening programs have effectively reduced the ages of identification for most children with permanent hearing loss. However, some children continue to be identified late and research is needed to guide management decisions. Further, studies of this group may help to discern if language normalizes following intervention, and/or if certain aspects of language might be vulnerable to persistent delays. The current study examines the impact of late identification and reduced audibility on speech and language outcomes via a longitudinal study of four children with mild-moderate sensorineural hearing loss. Design Longitudinal outcomes of four children with late-identified mild-moderate sensorineural hearing loss were studied using standardized measures and language sampling procedures, from at or near the point of identification (28-41 months) through 84 months of age. The children with hearing loss were compared to ten age-matched children with normal hearing on a majority of the measures through 60 months of age. Spontaneous language samples were collected from mother-child interaction sessions, recorded at consistent intervals in a laboratory-based play setting. Transcripts were analyzed using computer-based procedures (Systematic Analysis of Language Transcripts) and the Index of Productive Syntax. Possible influences of audibility were explored by examining the onset and productive use of a set of verb tense markers, and by monitoring the children's accuracy in use of morphological endings. Phonological samples at baseline were transcribed and analyzed using Computerized Profiling. Results At entry to the study, the four children with hearing loss demonstrated language delays, with pronounced delays in phonological development. Three of the four children demonstrated rapid progress with development and interventions, and performed within the average range on standardized speech and language measures compared to age-matched children by 60 months of age. However, persistent differences from children with normal hearing were observed in the areas of morphosyntax, speech intelligibility in conversation, and production of fricatives. Children with mild-moderate hearing loss demonstrated later than typical emergence of certain verb tense markers, which may be related to reduced or inconsistent audibility. Conclusions The results of this study suggest that early communication delays will resolve for children with late-identified mild-moderate hearing loss, given appropriate amplification and intervention services. A positive result is that three of four children demonstrated normalization of broad language behaviors by 60 months of age, in spite of significant delays at baseline. However, these children are at risk for persistent delays in phonology at the conversational level and for accuracy in use of morphological markers. The ways in which reduced auditory experiences and audibility may contribute to these delays are explored, along with implications for evaluation of outcomes. PMID:20548239

  15. Munchausen Syndrome by Proxy: Mother Fabricates Infant's Hearing Impairment.

    ERIC Educational Resources Information Center

    Kahn, Gerri; Goldman, Ellen

    1991-01-01

    Case study reports a case of Munchausen Syndrome by Proxy, a form of child abuse in which the mother presents a child for treatment for a condition she herself has invented or created. This case study describes the ways in which a mother obtained a diagnosis of sensorineural hearing loss as well as amplification for her normally hearing infant.…

  16. Bullying and Cyberbullying among Deaf Students and Their Hearing Peers: An Exploratory Study

    ERIC Educational Resources Information Center

    Bauman, Sheri; Pero, Heather

    2011-01-01

    A questionnaire on bullying and cyberbullying was administered to 30 secondary students (Grades 7-12) in a charter school for the Deaf and hard of hearing and a matched group of 22 hearing students in a charter secondary school on the same campus. Because the sample size was small and distributions non-normal, results are primarily descriptive and…

  17. The Mathematical and Science Skills of Students Who Are Deaf or Hard of Hearing Educated in Inclusive Settings

    ERIC Educational Resources Information Center

    Vosganoff, Diane; Paatsch, Louise E.; Toe, Dianne M.

    2011-01-01

    This study examined the science and mathematics achievements of 16 Year 9 students with hearing loss in an inclusive high-school setting in Western Australia. Results from the Monitoring Standards in Education (MSE) compulsory state tests were compared with state and class averages for students with normal hearing. Data were collected from three…

  18. Children's Performance in Complex Listening Conditions: Effects of Hearing Loss and Digital Noise Reduction

    ERIC Educational Resources Information Center

    Pittman, Andrea

    2011-01-01

    Purpose: To determine the effect of hearing loss (HL) on children's performance for an auditory task under demanding listening conditions and to determine the effect of digital noise reduction (DNR) on that performance. Method: Fifty children with normal hearing (NH) and 30 children with HL (8-12 years of age) categorized words in the presence of…

  19. The Locus Equation as an Index of Coarticulation in Syllables Produced by Speakers with Profound Hearing Loss

    ERIC Educational Resources Information Center

    McCaffrey Morrison, Helen

    2008-01-01

    Locus equations (LEs) were derived from consonant-vowel-consonant (CVC) syllables produced by four speakers with profound hearing loss. Group data indicated that LE functions obtained for the separate CVC productions initiated by /b/, /d/, and /g/ were less well-separated in acoustic space than those obtained from speakers with normal hearing. A…

  20. Components of Story Comprehension and Strategies to Support Them in Hearing and Deaf or Hard of Hearing Readers

    ERIC Educational Resources Information Center

    Sullivan, Susan; Oakhill, Jane

    2015-01-01

    In this article, we review the skills that have been found to be related to good story comprehension in novice readers with normal hearing and describe the relative weight each plays. The relationship between effective story comprehension and lower level skills (such as syntactic awareness and vocabulary knowledge) is considered, and the casual…

  1. Accuracy of Consonant-Vowel Syllables in Young Cochlear Implant Recipients and Hearing Children in the Single-Word Period

    ERIC Educational Resources Information Center

    Warner-Czyz, Andrea D.; Davis, Barbara L.; MacNeilage, Peter F.

    2010-01-01

    Purpose: Attaining speech accuracy requires that children perceive and attach meanings to vocal output on the basis of production system capacities. Because auditory perception underlies speech accuracy, profiles for children with hearing loss (HL) differ from those of children with normal hearing (NH). Method: To understand the impact of auditory…

  2. Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures

    PubMed Central

    Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald

    2017-01-01

    In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587

  3. Individual Sensitivity to Spectral and Temporal Cues in Listeners With Hearing Impairment

    PubMed Central

    Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal (amplitude envelope). Spectral and/or temporal dimensions of synthetic speech were systematically manipulated along a continuum, and recognition was measured using the manipulated stimuli. Level was controlled to ensure cue audibility. Discriminant function analysis was used to determine to what degree spectral and temporal information contributed to the identification of each stimulus. Results Listeners with normal hearing were influenced to a greater extent by spectral cues for all stimuli. Listeners with hearing impairment generally utilized spectral cues when the information was static (spectral shape) but used temporal cues when the information was dynamic (formant transition). The relative use of spectral and temporal dimensions varied among individuals, especially among listeners with hearing loss. Conclusion Information about spectral and temporal cue use may aid in identifying listeners who rely to a greater extent on particular acoustic cues and applying that information toward therapeutic interventions. PMID:25629388
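
    The discriminant-function step described above weighs how much each acoustic dimension contributes to identification. A loose sketch of that idea using scikit-learn's linear discriminant analysis on simulated spectral and temporal features (not the study's stimuli or exact analysis):

```python
# Minimal sketch (simulated data): linear discriminant analysis weights for a
# spectral and a temporal cue dimension predicting stimulus identification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n = 200
spectral_cue = rng.normal(size=n)
temporal_cue = rng.normal(size=n)
# Hypothetical responses driven mostly by the spectral dimension
labels = (0.9 * spectral_cue + 0.2 * temporal_cue
          + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([spectral_cue, temporal_cue])
lda = LinearDiscriminantAnalysis().fit(X, labels)
print("discriminant weights (spectral, temporal):", lda.coef_.ravel())
```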

  4. Optimal electrode selection for multi-channel electroencephalogram based detection of auditory steady-state responses.

    PubMed

    Van Dun, Bram; Wouters, Jan; Moonen, Marc

    2009-07-01

    Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing-impaired newborns, in particular, benefit from this technique because it allows a more precise diagnosis than traditional techniques, so a hearing aid can be fitted better at an early age. However, the measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts; in this case, the best response detection is obtained when noise-weighted averaging is applied to single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration yields near-optimal signal-to-noise ratios for 80% of subjects.
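
    A common way to detect an ASSR in a single EEG channel, in the detection-theory spirit described above, is to compare the spectral power at the modulation frequency with the power in neighbouring noise bins. The sketch below is illustrative only and is not the authors' algorithm; the sampling rate, modulation frequency, and bin counts are assumptions.

```python
# Minimal sketch (synthetic EEG): F-ratio style ASSR detection, comparing power
# at the modulation frequency against the mean power of nearby noise bins.
import numpy as np

fs = 1000.0      # sampling rate in Hz (assumed)
mod_freq = 90.0  # ASSR modulation frequency in Hz (assumed)
t = np.arange(0, 16, 1 / fs)
rng = np.random.default_rng(5)
eeg = 0.2 * np.sin(2 * np.pi * mod_freq * t) + rng.normal(scale=1.0, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

target_bin = int(np.argmin(np.abs(freqs - mod_freq)))
noise_bins = np.r_[target_bin - 10:target_bin - 1, target_bin + 2:target_bin + 11]

snr = spectrum[target_bin] / spectrum[noise_bins].mean()
print(f"power ratio at {mod_freq:.0f} Hz: {snr:.1f}")
```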

  5. Cognitive skills and academic achievement of deaf children with cochlear implants.

    PubMed

    Huber, Maria; Kipman, Ulrike

    2012-10-01

    To compare cognitive performance between children with cochlear implants (CI) and normal-hearing peers; provide information about correlations between cognitive performance, basic academic achievement, and medical/audiological and social background variables; and assess the predictor quality of these variables for cognition. Cross-sectional study with comparison group, diagnostic test assessment. Data were collected in the authors' clinic (children with CI) and in Austrian schools (normal-hearing children). Forty children with CI (of the initial 65 children eligible for this study), aged 7 to 11 years, and 40 normal-hearing children, matched by age and sex, were tested with (a) the Culture Fair Intelligence Test (CFIT); (b) the Number Sequences subtest of the Heidelberger Rechentest 1-4 (HRT); (c) Comprehension, (d) Coding, (e) Digit Span, and (f) Vocabulary subtests of HAWIK III (German WISC III); (g) the Corsi Block Tapping Test; (h) the Arithmetic Operations subtests of the HRT; and (i) Salzburger Lese-Screening (SLS, reading). In addition, medical, audiological, social, and educational data from children with CI were collected. The children with CI equaled normal-hearing children in (a), (d), (e), (g), (h), and (i) and performed significantly worse in (b), (c) and (f). Background variables correlated significantly with cognitive skills and academic achievement. Medical/audiological variables explained 44.3% of the variance in CFT1 (CFIT, younger children). Social variables explained 55% of CFT1 and 24.5% of the Corsi test. This study augments knowledge about the cognitive and academic skills of children with CI. Cognitive performance depends on the early ability to hear and on the social/educational background of the family.

  6. Cochlear implanted children present vocal parameters within normal standards.

    PubMed

    de Souza, Lourdes Bernadete Rocha; Bevilacqua, Maria Cecília; Brasolotto, Alcione Ghedini; Coelho, Ana Cristina

    2012-08-01

    to compare acoustic and perceptual vocal parameters of cochlear-implanted children with those of children with normal hearing. This is a cross-sectional, quantitative and qualitative study. Thirty-six cochlear-implanted children aged 3 years 3 months to 5 years 9 months and 25 children with normal hearing aged 3 years 11 months to 6 years 6 months participated in this study. The recordings and the acoustic analysis of the sustained vowel /a/ and spontaneous speech were performed using the PRAAT program. The parameters analyzed for the sustained vowel were the mean fundamental frequency, jitter, shimmer and harmonic-to-noise ratio (HNR). For the spontaneous speech, the minimum and maximum frequencies and the number of semitones were extracted. The perceptual analysis of the speech material used 100-point visual-analogue scales covering the overall severity of the vocal deviation, roughness, breathiness, strain, pitch, loudness and resonance deviation, and instability; this last parameter was only analyzed for the sustained vowel. The results demonstrated that the majority of the vocal parameters analyzed in the samples of the implanted children showed values similar to those obtained by the group of children with normal hearing. Implanted children who participate in a (re)habilitation and follow-up program can present vocal characteristics similar to those of children with normal hearing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
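
    The jitter, shimmer, and HNR values mentioned above are standard perturbation measures; local jitter and local shimmer, for instance, are mean cycle-to-cycle differences normalized by the mean period or amplitude. A minimal sketch with invented cycle data (not PRAAT output):

```python
# Minimal sketch (invented cycles): local jitter and local shimmer computed
# from glottal period and peak-amplitude sequences.
import numpy as np

periods_s = np.array([0.0040, 0.0041, 0.0039, 0.0042, 0.0040])  # cycle durations
amplitudes = np.array([0.81, 0.79, 0.83, 0.80, 0.82])           # cycle peak amplitudes

jitter_local = np.mean(np.abs(np.diff(periods_s))) / np.mean(periods_s) * 100
shimmer_local = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100

print(f"jitter (local) = {jitter_local:.2f}%, shimmer (local) = {shimmer_local:.2f}%")
```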

  7. Clinical Study to Evaluate the Association Between Sensorineural Hearing Loss and Diabetes Mellitus in Poorly Controlled Patients Whose HbA1c >8.

    PubMed

    Srinivas, C V; Shyamala, V; Shiva Kumar, B R

    2016-06-01

    The relationship between sensorineural hearing loss (SNHL) and diabetes mellitus has been recognized for more than 150 years. The pathophysiology of diabetes-related hearing loss remains speculative. The hearing loss is usually bilateral, of gradual onset, and affects higher frequencies. This study aims to determine the prevalence of SNHL in diabetes mellitus (DM) and its relation to age, sex, duration of DM and glycemic control. A total of 50 type 2 diabetics aged 30-65 years were included in the study. FBS, PPBS, and HbA1c were measured for all subjects, who then underwent pure-tone audiometry (PTA). The type and severity of hearing loss were noted. The occurrence of SNHL was then compared with age, sex, duration, and control of DM. Sensorineural hearing loss was found in 66% of type 2 diabetic patients and 34% had normal hearing; of the 50 patients, 33 had SNHL. All cases of SNHL were of gradual onset and none had sudden-onset hearing loss. Normal hearing was found in 34% of patients, whereas 54% had mild hearing loss and 12% had moderate hearing loss. The association of hearing loss with the sex of the patient was not significant. However, there was a significant association of older age, longer duration of diabetes, and poor glycemic control with SNHL. In subjects with HbA1c greater than 8 and diabetes duration of more than 10 years, the prevalence of SNHL was more than 85%, which is statistically significant. Sensorineural hearing loss in diabetes mellitus is gradually progressive and involves high-frequency thresholds. Hearing thresholds increase with increasing age, longer duration of diabetes, and HbA1c greater than 8%.

  8. Evaluation of Critical Bandwidth Using Digitally Processed Speech.

    DTIC Science & Technology

    1982-05-12

    observed after repeating the two tests on persons with confirmed cases of sensorineural hearing impairment. Again, the plotted speech discrimination...quantifying the critical bandwidth of persons on a clinical or pre-employment level. The complex portion of the test design (the computer generation of..."super" normal hearing individuals (i.e., those persons with narrower-than-normal critical bands). This ability of the test shows promise as a valuable

  9. How Children with Normal Hearing and Children with a Cochlear Implant Use Mentalizing Vocabulary and Other Evaluative Expressions in Their Narratives

    ERIC Educational Resources Information Center

    Huttunen, Kerttu; Ryder, Nuala

    2012-01-01

    This study explored the use of mental state and emotion terms and other evaluative expressions in the story generation of 65 children (aged 2-8 years) with normal hearing (NH) and 11 children (aged 3-7 years) using a cochlear implant (CI). Children generated stories on the basis of sets of sequential pictures. The stories of the children with CI…

  10. Individual differences in selective attention predict speech identification at a cocktail party.

    PubMed

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-08-31

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a proportion of variance similar to that explained by binaural sensitivity to the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.

  11. Effect of simulated bilateral cochlear distortion on speech discrimination in normal subjects.

    PubMed

    Hood, J D; Prasher, D K

    1990-01-01

    Bilateral sensorineural hearing loss may introduce grossly dissimilar cochlear distortion at the two ears, causing abnormal demands to be made upon the cortical analytical centres which normally receive congruent information. As a result, the prescription of binaural hearing aids may be a handicap rather than a help. In order to explore this possibility, 10 normal subjects were presented with simulated, dissimilar cochlear distortion at the two ears. Discrimination scores with binaural presentation were poorer than the best monaural score and there were clear indications that in the former, subjects selectively attended to one ear and neglected the other. In contrast, binaural presentation of the same simulated distortion resulted in a significant improvement, compared with the monaural discrimination score. Inability of the cortex to contend with discongruent speech input from the two ears may be a factor contributing to the rejection of binaural hearing aids in some individuals.

  12. Extension of Effective Date for Temporary Pilot Program Setting the Time and Place for a Hearing Before an Administrative Law Judge. Final rule.

    PubMed

    2015-07-02

    We are extending for one year our pilot program that authorizes the agency to set the time and place for a hearing before an administrative law judge (ALJ). Extending the pilot program continues our commitment to improve the efficiency of our hearing process and to maintain a hearing process that results in accurate, high-quality decisions for claimants. The current pilot program will expire on August 10, 2015. In this final rule, we are extending the effective date to August 12, 2016. We are making no other substantive changes.

  13. Hyperventilation-induced nystagmus in vestibular schwannoma and unilateral sensorineural hearing loss.

    PubMed

    Mandalà, Marco; Giannuzzi, Annalisa; Astore, Serena; Trabalzini, Franco; Nuti, Daniele

    2013-07-01

    We evaluated the incidence and characteristics of hyperventilation-induced nystagmus (HVN) in 49 patients with gadolinium-enhanced magnetic resonance imaging evidence of vestibular schwannoma and 53 patients with idiopathic unilateral sensorineural hearing loss and normal radiological findings. The sensitivity and specificity of the hyperventilation test were compared with other audio-vestibular diagnostic tests (bedside examination of eye movements, caloric test, auditory brainstem responses) in the two groups of patients. The hyperventilation test scored the highest diagnostic efficiency (sensitivity 65.3 %; specificity 98.1 %) of the four tests in the differential diagnosis of vestibular schwannoma and idiopathic unilateral sensorineural hearing loss. Small tumors with a normal caloric response or caloric paresis were associated with ipsilateral HVN and larger tumors and severe caloric deficits with contralateral HVN. These results confirm that the hyperventilation test is a useful diagnostic test for predicting vestibular schwannoma in patients with unilateral sensorineural hearing loss.
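
    The diagnostic figures quoted above follow directly from the standard definitions of sensitivity and specificity; the sketch below uses counts inferred from the reported group sizes and percentages (32/49 and 52/53), so treat them as a reconstruction rather than published raw data.

    def sensitivity(true_pos, false_neg):
        # Proportion of affected patients (vestibular schwannoma) with a positive test.
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        # Proportion of unaffected patients with a negative test.
        return true_neg / (true_neg + false_pos)

    print(f"sensitivity = {sensitivity(32, 17):.1%}")   # ~65.3 %
    print(f"specificity = {specificity(52, 1):.1%}")    # ~98.1 %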

  14. Quantity processing in deaf and hard of hearing children: evidence from symbolic and nonsymbolic comparison tasks.

    PubMed

    Rodríguez-Santos, José Miguel; Calleja, Marina; García-Orza, Javier; Iza, Mauricio; Damas, Jesús

    2014-01-01

    Deaf children usually achieve lower scores on numerical tasks than normally hearing peers. Explanations for mathematical disabilities in hearing children are based on quantity representation deficits (Geary, 1994) or on deficits in accessing these representations (Rousselle & Noël, 2008). The present study aimed to verify, by means of symbolic (Arabic digits) and nonsymbolic (dot constellations and hands) magnitude comparison tasks, whether deaf children show deficits in representations or in accessing numerical representations. The study participants were 10 prelocutive deaf children and 10 normally hearing children. Numerical distance and magnitude were manipulated. Response time (RT) analysis showed similar magnitude and distance effects in both groups on the 3 tasks. However, slower RTs were observed among the deaf participants on the symbolic task alone. These results suggest that although both groups' quantity representations were similar, the deaf group experienced a delay in accessing representations from symbolic codes.

  15. Analysis of subtle auditory dysfunctions in young normal-hearing subjects affected by Williams syndrome.

    PubMed

    Paglialonga, Alessia; Barozzi, Stefania; Brambilla, Daniele; Soi, Daniela; Cesarani, Antonio; Spreafico, Emanuela; Tognola, Gabriella

    2014-11-01

    To assess whether young subjects affected by Williams syndrome (WS) with normal middle ear function and normal hearing thresholds might have subtle auditory dysfunctions that could be detected by using clinically available measurements. Otoscopy, acoustic reflexes, tympanometry, pure-tone audiometry, and distortion product otoacoustic emissions (DPOAEs) were measured in a group of 13 WS subjects and in 13 age-matched, typically developing control subjects. Participants were required to have normal otoscopy, an A-type tympanogram, normal acoustic reflex thresholds, and pure-tone thresholds ≤15 dB HL at 0.5, 1, and 2 kHz bilaterally. To limit the possible influence of middle ear status on DPOAE recordings, we analyzed only data from ears with pure-tone thresholds ≤15 dB HL across all octave frequencies in the range 0.25-8 kHz, middle ear pressure (MEP) > -50 daPa, static compliance (SC) in the range 0.3-1.2 cm3, and ear canal volume (ECV) in the range 0.2-2 ml, and we performed analysis of covariance to remove the possible effects of middle ear variables on DPOAEs. No differences in mean hearing thresholds, SC, ECV, or gradient were observed between the two groups, whereas significantly lower MEP values were found in WS subjects, as well as significantly decreased DPOAEs up to 3.2 kHz after adjusting for differences in middle ear status. Results revealed that WS subjects with normal hearing thresholds (≤15 dB HL) and normal middle ear function (MEP > -50 daPa, SC in the range 0.3-1.2 cm3, ECV in the range 0.2-2 ml) might have subtle auditory dysfunctions that can be detected with clinically available methods. Overall, this study points out the importance of using otoacoustic emissions as a complement to routine audiological examinations in individuals with WS to detect possible subtle auditory dysfunctions before the onset of hearing loss, so that patients can be identified early, monitored more closely, and treated promptly. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
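
    The ear-level inclusion criteria listed above translate into a simple screening rule; the helper below is a hypothetical illustration of that rule, with argument names invented for the example.

    def ear_included(thresholds_db_hl, mep_dapa, sc_cm3, ecv_ml):
        # Criteria as described above: pure-tone thresholds <= 15 dB HL at every
        # octave frequency from 0.25 to 8 kHz, middle ear pressure > -50 daPa,
        # static compliance 0.3-1.2 cm3, ear canal volume 0.2-2 ml.
        return (all(t <= 15 for t in thresholds_db_hl)
                and mep_dapa > -50
                and 0.3 <= sc_cm3 <= 1.2
                and 0.2 <= ecv_ml <= 2.0)

    # Example ears (thresholds at 0.25, 0.5, 1, 2, 4 and 8 kHz):
    print(ear_included([10, 5, 5, 10, 15, 15], mep_dapa=-30, sc_cm3=0.6, ecv_ml=1.1))  # True
    print(ear_included([10, 5, 5, 10, 20, 15], mep_dapa=-30, sc_cm3=0.6, ecv_ml=1.1))  # False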

  16. How to quantify binaural hearing in patients with unilateral hearing using hearing implants.

    PubMed

    Snik, Ad; Agterberg, Martijn; Bosman, Arjan

    2015-01-01

    Application of bilateral hearing devices in bilateral hearing loss and unilateral application in unilateral hearing loss (second ear with normal hearing) does not a priori lead to binaural hearing. An overview is presented of several measures of binaural benefit that have been used in patients with unilateral or bilateral deafness using one or two cochlear implants, respectively, and in patients with unilateral or bilateral conductive/mixed hearing loss using one or two percutaneous bone conduction implants (BCDs), respectively. Overall, according to this overview, the most significant and sensitive measure is the benefit in directional hearing. Measures using speech (viz. binaural summation, binaural squelch or use of the head shadow effect) showed minor benefits, except for patients with bilateral conductive/mixed hearing loss using two BCDs. Although less feasible in daily practice, the binaural masking level difference test seems to be a promising option in the assessment of binaural function. © 2015 S. Karger AG, Basel.

  17. Impact of hearing loss in the workplace: raising questions about partnerships with professionals.

    PubMed

    Jennings, Mary Beth; Shaw, Lynn

    2008-01-01

    The number of adults with hearing loss who continue to work later in life is growing. Persons with hearing loss are generally unaware of the role that audiologists, occupational therapists, and vocational rehabilitation counsellors might play in the assessment of the workplace environment and appropriate accommodations. Three narratives of adults with hearing loss are used to demonstrate the gaps in accessing information, technology and services needed to maintain optimal work performance and productivity. The lack of recognition of the multidimensional needs of older workers with hearing loss and the lack of timely coordination of services led to all three persons acting alone in trying to access services and supports. In two of the three cases the impact of the hearing loss resulted in further unexpected losses such as the loss of employment and the loss of a worker-identity. There is an urgent need for partnering with persons who are hard of hearing to develop new strategies for knowledge exchange, more thorough assessment of hearing demands and modifications in the workplace, and interdisciplinary approaches to service specific to the needs of hard of hearing persons.

  18. What Can We Learn about Auditory Processing from Adult Hearing Questionnaires?

    PubMed

    Bamiou, Doris-Eva; Iliadou, Vasiliki Vivian; Zanchetta, Sthella; Spyridakou, Chrysa

    2015-01-01

    Questionnaires addressing auditory disability may identify and quantify specific symptoms in adult patients with listening difficulties. (1) To assess the validity of the Speech, Spatial, and Qualities of Hearing Scale (SSQ), the (Modified) Amsterdam Inventory for Auditory Disability (mAIAD), and the Hyperacusis Questionnaire (HYP) in adult patients experiencing listening difficulties in the presence of a normal audiogram. (2) To examine which individual questionnaire items give the worst scores in clinical participants with an auditory processing disorder (APD). A prospective correlational analysis study. Clinical participants (N = 58) referred to audiology/ear, nose, and throat or audiovestibular medicine clinics for assessment because of listening difficulties in the presence of normal audiometric thresholds. Normal control participants (N = 30). The mAIAD, HYP, and the SSQ were administered to a clinical population of nonneurological adults who were referred for auditory processing (AP) assessment because of hearing complaints, in the presence of a normal audiogram and normal cochlear function, and to a sample of age-matched normal-hearing controls, before the AP testing. Clinical participants with abnormal results in at least one ear and in at least two tests of AP (with at least one of these tests being nonspeech) were classified as clinical APD (N = 39), and the remaining participants (16 of whom had a single test abnormality) as clinical non-APD (N = 19). The SSQ correlated strongly with the mAIAD and the HYP, and the correlation was similar within the clinical group and the normal controls. All questionnaire total scores and subscores (except sound distinction of the mAIAD) were significantly worse in the clinical APD group versus the normal group, while questionnaire total scores and most subscores indicated greater listening difficulties for the clinical non-APD versus the normal subgroups. Overall, the clinical non-APD group tended to give better scores than the APD group in all questionnaires administered. Correlation was strong for the worse-ear gaps-in-noise threshold with the SSQ, mAIAD, and HYP; strong to moderate for the speech in babble and left-ear dichotic digit test scores (at p < 0.01); and weak to moderate for the remaining AP tests except the frequency pattern test, which did not correlate. The worst-scored items in all three questionnaires concerned speech-in-noise questions. This is similar to the worst-scored items reported for hearing-impaired participants in the literature. The worst-scored items of the clinical group also included quality aspects of listening questions from the SSQ, which most likely pertain to cognitive aspects of listening, such as the ability to ignore other sounds and listening effort. Hearing questionnaires may help assess the symptoms of adults with APD. The listening difficulties and needs of adults with APD to some extent overlap with those of hearing-impaired listeners, but there are significant differences. The correlation of the gaps-in-noise and duration pattern (but not frequency pattern) tests with the questionnaire scores indicates that temporal processing deficits may play an important role in the clinical presentation. American Academy of Audiology.

  19. Molecular approach of auditory neuropathy.

    PubMed

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene mutation sites were amplified and analyzed by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutation carriers, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals carried mutations (100%). There are differences at the molecular level between patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  20. QUALITY OF LIFE IN CHILDREN WITH HEARING IMPAIRMENT: SYSTEMATIC REVIEW AND META-ANALYSIS

    PubMed Central

    Roland, Lauren; Fischer, Caroline; Tran, Kayla; Rachakonda, Tara; Kallogjeri, Dorina; Lieu, Judith

    2017-01-01

    Objective To determine the impact of pediatric hearing loss on quality of life (QOL). Data Sources A qualified medical librarian conducted a literature search for relevant publications that evaluate QOL in school-aged children with hearing loss (HL). Review Methods Studies were assessed independently by two reviewers for inclusion in the systematic review and meta-analysis. Results From 979 abstracts, 69 were identified as relevant; ultimately 41 articles were included in the systematic review. This review revealed that children with HL generally report a lower QOL than their normal hearing peers, and QOL improves after interventions. The extent of these differences is variable among studies and depends on the QOL measure. Four studies using the Pediatric Quality of Life Inventory (PedsQL) had sufficient data for inclusion in a meta-analysis. After pooling studies, statistically and clinically significant differences in PedsQL scores were found between children with normal hearing and those with HL, specifically in the Social and School domains. Statistically significant differences were also noted in total scores for children with unilateral HL and in the physical domain for children with bilateral HL as compared to normal hearing; however, these differences were not clinically meaningful. Conclusions Our analysis reveals that decreased QOL in children with HL is detected in distinct domains of the PedsQL questionnaire. These domains of school functioning and social interactions are especially important for development and learning. Future work should focus on these specific aspects of QOL when assessing HL in the pediatric population. PMID:27118820
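
    Pooling group differences in PedsQL scores across studies is commonly done with inverse-variance weighting; the sketch below shows a generic fixed-effect version of that calculation with made-up inputs, and is not a reproduction of the review's actual model or extracted data.

    import math

    def fixed_effect_pool(effects, std_errors):
        # Inverse-variance weighted mean difference and its 95% confidence interval.
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

    # Illustrative mean differences (HL minus NH) in one PedsQL domain and their
    # standard errors; these are not the values extracted in the review.
    effects = [-8.0, -5.5, -10.2, -6.3]
    ses = [2.1, 1.8, 3.0, 2.4]
    pooled, ci = fixed_effect_pool(effects, ses)
    print(f"pooled difference = {pooled:.1f}, 95% CI ({ci[0]:.1f}, {ci[1]:.1f})")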

  1. Effect of conductive hearing loss on central auditory function.

    PubMed

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on the temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly lower for the control group than for the CHL group in both ears (right: p=0.004; left: p<0.001). Individuals with CHL had significantly fewer correct responses than individuals with normal hearing on both sides (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  2. Emotional Perception of Music in Children with Unilateral Cochlear Implants

    PubMed Central

    Shirvani, Sareh; Jafari, Zahra; Sheibanizadeh, Abdolreza; Motasaddi Zarandy, Masoud; Jalaie, Shohre

    2014-01-01

    Introduction: Cochlear implantation (CI) improves language skills among children with hearing loss. However, children with CIs still fall short of fulfilling some other needs, including musical perception. This is often attributed to the biological, technological, and acoustic limitations of CIs. Emotions play a key role in the understanding and enjoyment of music. The present study aimed to investigate the emotional perception of music in children with bilaterally severe-to-profound hearing loss and unilateral CIs. Materials and Methods: Twenty-five children with congenital severe-to-profound hearing loss and unilateral CIs and 30 children with normal hearing participated in the study. The children's emotional perceptions of music, as defined by Peretz (1998), were measured. Children were instructed to indicate happy or sad feelings fostered in them by the music by pointing to pictures of faces showing these emotions. Results: Children with CIs obtained significantly lower scores than children with normal hearing, for both happy and sad items of music as well as in overall test scores (P<0.001). Furthermore, in both the CI group (P=0.49) and the control group (P<0.001), the happy items were recognized correctly more often than the sad items. Conclusion: Hearing-impaired children with CIs had poorer emotional perception of music than their normal-hearing peers. Because of the importance of music in the development of language, cognitive and social interaction skills, aural rehabilitation programs for children with CIs should focus particularly on music. Furthermore, it is essential to enhance the quality of musical perception by improving the quality of implant prostheses. PMID:25320700

  3. Monitoring auditory cortical plasticity in hearing aid users with long latency auditory evoked potentials: a longitudinal study.

    PubMed

    Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile

    2018-02-19

    The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users.

  4. Monitoring auditory cortical plasticity in hearing aid users with long latency auditory evoked potentials: a longitudinal study

    PubMed Central

    Leite, Renata Aparecida; Magliaro, Fernanda Cristina Leite; Raimundo, Jeziela Cristina; Bento, Ricardo Ferreira; Matas, Carla Gentile

    2018-01-01

    OBJECTIVE: The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. METHODS: Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. RESULTS: The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. CONCLUSIONS: Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users. PMID:29466495

  5. Association between hearing impairment and lower levels of physical activity in older adults.

    PubMed

    Gispen, Fiona E; Chen, David S; Genther, Dane J; Lin, Frank R

    2014-08-01

    To determine whether hearing impairment, highly prevalent in older adults, is associated with activity levels. Cross-sectional. National Health and Nutrition Examination Survey (2005-06). Individuals aged 70 and older who completed audiometric testing and whose physical activity was assessed subjectively using questionnaires and objectively using body-worn accelerometers (N=706). Hearing impairment was defined according to the speech-frequency (0.5-4 kHz) pure-tone average in the better-hearing ear (normal <25.0 dB, mild 25.0-39.9 dB, moderate or greater ≥40 dB). Main outcome measures were self-reported leisure time physical activity and accelerometer-measured physical activity. Both were quantified using minutes of moderate-intensity physical activity and categorized as inactive, insufficiently active, or sufficiently active. Ordinal logistic regression analyses were conducted and adjusted for demographic and cardiovascular risk factors. Individuals with moderate or greater hearing impairment had greater odds than those with normal hearing of being in a lower category of physical activity as measured according to self-report (OR=1.59, 95% CI=1.11-2.28) and accelerometry (OR=1.70, 95% CI=0.99-2.91). Mild hearing impairment was not associated with level of physical activity. Moderate or greater hearing impairment in older adults is associated with lower levels of physical activity independent of demographic and cardiovascular risk factors. Future research is needed to investigate the basis of this association and whether hearing rehabilitative interventions could affect physical activity in older adults. © 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.
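
    The hearing-impairment definition used above (speech-frequency pure-tone average in the better-hearing ear) is easy to express in code; the sketch below applies the cutoffs quoted in the abstract and is meant only as an illustration of the classification.

    import statistics

    def better_ear_pta(left_thresholds, right_thresholds):
        # Speech-frequency pure-tone average (0.5, 1, 2, 4 kHz) of the better ear.
        return min(statistics.mean(left_thresholds), statistics.mean(right_thresholds))

    def classify(pta_db):
        if pta_db < 25.0:
            return "normal"
        if pta_db < 40.0:
            return "mild"
        return "moderate or greater"

    # Example thresholds in dB HL at 0.5, 1, 2 and 4 kHz for the left and right ear.
    pta = better_ear_pta([30, 35, 40, 50], [25, 30, 35, 45])
    print(pta, classify(pta))   # 33.75 mild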

  6. Factors contributing to hearing impairment in patients with cleft lip/palate in Malaysia: A prospective study of 346 ears.

    PubMed

    Cheong, Jack Pein; Soo, Siew Shuin; Manuel, Anura Michelle

    2016-09-01

    To determine the factors contributing towards hearing impairment in patients with cleft lip/palate. A prospective analysis was conducted on 173 patients (346 ears) with cleft lip and palate (CL/P) who presented to the combined cleft clinic at University Malaya Medical Centre (UMMC) over 12 months. The patients' hearing status was determined using otoacoustic emission (OAE), pure tone audiometry (PTA) and auditory brainstem response (ABR). These results were analysed against several parameters, which included age, gender, race, type of cleft pathology, and the impact and timing of repair surgery. The patients' ages ranged from 1 to 26 years. They comprised 30% with unilateral cleft lip and palate (UCLP), 28% with bilateral cleft lip and palate (BCLP), 28% with isolated cleft palate (ICP) and 14% with isolated cleft lip (ICL). The majority of the patients (68.2%) had normal otoscopic findings. Out of the 346 ears, 241 (70%) passed the hearing tests. There was no significant relationship between the patients' gender or ethnicity and their hearing status. The type of cleft pathology significantly influenced the outcome of the PTA and ABR screening results (p < 0.001). There was no significant difference in hearing test outcomes between the repaired and unrepaired cleft groups. However, hearing improvement occurred when palatal repair was performed before the age of 1 year (OR = 2.37, CI 1.2-4.6, p = 0.01). The majority of the cleft patients had normal hearing (70%). Hearing thresholds varied significantly between the different types of cleft pathology. Surgery conferred no significant impact on the hearing outcome unless it was performed before the age of 1 year. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. The socioeconomic impact of hearing loss in U.S. adults.

    PubMed

    Emmett, Susan D; Francis, Howard W

    2015-03-01

    To evaluate the associations between hearing loss and educational attainment, income, and unemployment/underemployment in U.S. adults. National cross-sectional survey. Ambulatory examination centers. Adults aged 20 to 69 years who participated in the 1999 to 2002 cycles of the NHANES (National Health and Nutrition Examination Survey) audiometric evaluation and income questionnaire (N = 3,379). Pure-tone audiometry, with hearing loss defined by World Health Organization criteria of bilateral pure-tone average of more than 25 dB (0.5, 1, 2, 4 kHz). Low educational attainment, defined as not completing high school; low income, defined as family income less than $20,000 per year; and unemployment or underemployment, defined as not having a job or working less than 35 hours per week. Individuals with hearing loss had 3.21 times higher odds of low educational attainment (95% confidence interval [95% CI], 2.20-4.68) compared with normal-hearing individuals. Controlling for education, age, sex, and race, individuals with hearing loss had 1.58 times higher odds of low income (95% CI, 1.16-2.15) and 1.98 times higher odds of being unemployed or underemployed (95% CI, 1.38-2.85) compared with normal-hearing individuals. Hearing loss is associated with low educational attainment in U.S. adults. Even after controlling for education and important demographic factors, hearing loss is independently associated with economic hardship, including both low income and unemployment/underemployment. The societal impact of hearing loss is profound in this nationally representative study and should be further evaluated with longitudinal cohorts. Received institutional review board approval (National Center for Health Statistics Institutional Review Board Protocol no. 98-12).

  8. 75 FR 44901 - Qualified Zone Academy Bonds; Obligations of States and Political Subdivisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-30

    ... remedial actions for QZABs. A public hearing was scheduled for July 21, 2004. The public hearing was...) Retirement from service. The retirement from service of financed property due to normal wear or obsolescence...

  9. Cochlear compression: perceptual measures and implications for normal and impaired hearing.

    PubMed

    Oxenham, Andrew J; Bacon, Sid P

    2003-10-01

    This article provides a review of recent developments in our understanding of how cochlear nonlinearity affects sound perception and how a loss of the nonlinearity associated with cochlear hearing impairment changes the way sounds are perceived. The response of the healthy mammalian basilar membrane (BM) to sound is sharply tuned, highly nonlinear, and compressive. Damage to the outer hair cells (OHCs) results in changes to all three attributes: in the case of total OHC loss, the response of the BM becomes broadly tuned and linear. Many of the differences in auditory perception and performance between normal-hearing and hearing-impaired listeners can be explained in terms of these changes in BM response. Effects that can be accounted for in this way include poorer audiometric thresholds, loudness recruitment, reduced frequency selectivity, and changes in apparent temporal processing. All these effects can influence the ability of hearing-impaired listeners to perceive speech, especially in complex acoustic backgrounds. A number of behavioral methods have been proposed to estimate cochlear nonlinearity in individual listeners. By separating the effects of cochlear nonlinearity from other aspects of hearing impairment, such methods may contribute towards identifying the different physiological mechanisms responsible for hearing loss in individual patients. This in turn may lead to more accurate diagnoses and more effective hearing-aid fitting for individual patients. A remaining challenge is to devise a behavioral measure that is sufficiently accurate and efficient to be used in a clinical setting.

  10. The relationship between distortion product otoacoustic emissions and extended high-frequency audiometry in tinnitus patients. Part 1: normally hearing patients with unilateral tinnitus.

    PubMed

    Fabijańska, Anna; Smurzyński, Jacek; Hatzopoulos, Stavros; Kochanek, Krzysztof; Bartnik, Grażyna; Raj-Koziak, Danuta; Mazzoli, Manuela; Skarżyński, Piotr H; Jędrzejczak, Wieslaw W; Szkiełkowska, Agata; Skarżyński, Henryk

    2012-12-01

    The aim of this study was to evaluate distortion product otoacoustic emissions (DPOAEs) and extended high-frequency (EHF) thresholds in a control group and in patients with normal hearing sensitivity in the conventional frequency range and reporting unilateral tinnitus. Seventy patients were enrolled in the study: 47 patients with tinnitus in the left ear (Group 1) and 23 patients with tinnitus in the right ear (Group 2). The control group included 60 otologically normal subjects with no history of pathological tinnitus. Pure-tone thresholds were measured at all standard frequencies from 0.25 to 8 kHz, and at 10, 12.5, 14, and 16 kHz. The DPOAEs were measured in the frequency range from approximately 0.5 to 9 kHz using the primary tones presented at 65/55 dB SPL. The left ears of patients in Group 1 had higher median hearing thresholds than those in the control subjects at all 4 EHFs, and lower mean DPOAE levels than those in the controls for almost all primary frequencies, but significantly lower only in the 2-kHz region. Median hearing thresholds in the right ears of patients in Group 2 were higher than those in the right ears of the control subjects in the EHF range at 12.5, 14, and 16 kHz. The mean DPOAE levels in the right ears were lower in patients from Group 2 than those in the controls for the majority of primary frequencies, but only reached statistical significance in the 8-kHz region. Hearing thresholds in tinnitus ears with normal hearing sensitivity in the conventional range were higher in the EHF region than those in non-tinnitus control subjects, implying that cochlear damage in the basal region may result in the perception of tinnitus. In general, DPOAE levels in tinnitus ears were lower than those in ears of non-tinnitus subjects, suggesting that subclinical cochlear impairment in limited areas, which can be revealed by DPOAEs but not by conventional audiometry, may exist in tinnitus ears. For patients with tinnitus, DPOAE measures combined with behavioral EHF hearing thresholds may provide additional clinical information about the status of the peripheral hearing.

  11. High-Frequency Amplification and Sound Quality in Listeners with Normal through Moderate Hearing Loss

    ERIC Educational Resources Information Center

    Ricketts, Todd A.; Dittberner, Andrew B.; Johnson, Earl E.

    2008-01-01

    Purpose: One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the…

  12. Speech Production in 12-Month-Old Children with and without Hearing Loss

    ERIC Educational Resources Information Center

    McGowan, Richard S.; Nittrouer, Susan; Chenausky, Karen

    2008-01-01

    Purpose: The purpose of this study was to compare speech production at 12 months of age for children with hearing loss (HL) who were identified and received intervention before 6 months of age with those of children with normal hearing (NH). Method: The speech production of 10 children with NH was compared with that of 10 children with HL whose…

  13. Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

    PubMed

    Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve

    The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
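
    The two WARRM scores can be illustrated with a small scoring helper; the data layout below is hypothetical and only mirrors the design described above (set sizes of 2-6 words, five trials per set size, 100 words in total), not the published scoring procedure.

    def warrm_scores(trials):
        # Each trial dict holds: 'set_size' (words presented, 2-6),
        # 'recognized' (words repeated correctly), 'recalled' (words recalled).
        total_words = sum(t["set_size"] for t in trials)
        recognition = 100.0 * sum(t["recognized"] for t in trials) / total_words
        recall = 100.0 * sum(t["recalled"] for t in trials) / total_words
        return recognition, recall

    # One hypothetical trial per set size (the full test has five per set size).
    trials = [{"set_size": n, "recognized": n, "recalled": max(n - 1, 1)}
              for n in range(2, 7)]
    recognition, recall = warrm_scores(trials)
    print(f"word recognition = {recognition:.0f} %, recall = {recall:.0f} %")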

  14. 77 FR 28741 - The Housing and Economic Recovery Act of 2008 (HERA): Changes to the Section 8 Tenant-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ... (this is not a toll-free number). Individuals with speech or hearing impairments may access this number... listed telephone number is not a toll-free number. Persons with hearing or speech impairments may access... identify and consider regulatory approaches that reduce burdens and maintain flexibility and freedom of...

  15. Disintegration of porous polyethylene prostheses.

    PubMed

    Kerr, A G; Riley, D N

    1999-06-01

    A Plastipore (porous polyethylene) Total Ossicular Replacement Prosthesis gave an excellent initial hearing result which was maintained for 14 years. Hearing then began to deteriorate and revision surgery showed disintegration of the prosthesis and a defect in the stapes footplate. Histological examination confirmed previous findings in porous polyethylene with multinucleated foreign body giant cells and breakdown of the material.

  16. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  17. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers.

    PubMed

    Meister, Hartmut; Schreitmüller, Stefan; Ortmann, Magdalene; Rählmann, Sebastian; Walger, Martin

    2016-01-01

    Everyday communication frequently comprises situations with more than one talker speaking at a time. These situations are challenging since they pose high attentional and memory demands placing cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation and word recall and that the difficulties experienced by the hearing impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups closely matched for their age and neuropsychological profile but differing in hearing acuity were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences either with low or high context were used in order to consider linguistic properties. Compared to their normal hearing peers, listeners with hearing loss revealed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. In order to shed light on the underlying mechanisms, different error sources, namely having misunderstood, confused, or omitted words were investigated. Misunderstanding and omitting words were more frequently observed in the hearing impaired than in the normal hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post-hoc analysis, the listeners were further separated with respect to their WMC. It appeared that higher capacity could be used in the sense of a compensatory mechanism with respect to the adverse effects of hearing loss, especially with low context speech.

  18. The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study.

    PubMed

    Degeest, Sofie; Keppler, Hannah; Corthals, Paul

    2017-04-14

    The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different listening conditions. A primary speech-recognition task and a secondary memory task were performed both separately and simultaneously. Furthermore, subjective listening effort was questioned for various listening situations. The Tinnitus Handicap Inventory was used to control for tinnitus handicap. Listening effort significantly increased in the tinnitus group across listening conditions. There was no significant difference in listening effort between listening conditions, nor was there an interaction between groups and listening conditions. Subjective listening effort did not significantly differ between both groups. This study is a first exploration of listening effort in normal-hearing participants with chronic tinnitus showing that listening effort is increased as compared with a control group. There is a need to further investigate the cognitive functions important for speech understanding and their possible relation with the presence of tinnitus and listening effort.
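
    Dual-task paradigms of this kind typically quantify listening effort as the decline in secondary-task performance relative to the single-task baseline; the proportional dual-task cost below is a common choice for that metric, not necessarily the exact measure used in this study.

    def dual_task_cost(single_task_score, dual_task_score):
        # Proportional drop in secondary (memory) task performance when it is
        # performed together with the primary speech-recognition task.
        return 100.0 * (single_task_score - dual_task_score) / single_task_score

    # Illustrative secondary-task scores in percent correct, not study data;
    # a larger cost indicates greater listening effort.
    print(f"dual-task cost = {dual_task_cost(92.0, 78.0):.1f} %")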

  19. [Aberrant topological properties of whole-brain functional network in chronic right-sided sensorineural hearing loss: a resting-state functional MRI study].

    PubMed

    Zhang, Lingling; Liu, Bin; Xu, Yangwen; Yang, Ming; Feng, Yuan; Huang, Yaqing; Huan, Zhichun; Hou, Zhaorui

    2015-02-03

    To investigate the topological properties of the functional brain network in unilateral sensorineural hearing loss (USNHL) patients. In this study, we acquired resting-state BOLD-fMRI data from 19 right-sided USNHL patients and 31 healthy controls with normal hearing and constructed their whole-brain functional networks. Two-sample two-tailed t-tests were performed to investigate group differences in topological parameters between the USNHL patients and the controls. Partial correlation analysis was conducted to determine the relationships between the network metrics and USNHL-related variables. Both USNHL patients and controls exhibited small-world architecture in their brain functional networks within the sparsity range 0.1-0.2. Compared to the controls, USNHL patients showed a significant increase in characteristic path length and normalized characteristic path length, but a significant decrease in global efficiency. The clustering coefficient, local efficiency and normalized clustering coefficient showed no significant differences. Furthermore, USNHL patients exhibited no significant association between the altered network metrics and the duration of USNHL or the severity of hearing loss. Our results indicate altered topological properties of the whole-brain functional networks in USNHL patients, which may help us understand the pathophysiologic mechanisms of USNHL.
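
    The metrics reported above (characteristic path length, clustering coefficient, global efficiency) are standard graph measures; the sketch below computes them with networkx on a correlation matrix thresholded to a target sparsity. The thresholding scheme and the random data are illustrative assumptions, not the authors' preprocessing pipeline.

    import numpy as np
    import networkx as nx

    def binarize_to_sparsity(corr, sparsity):
        # Keep the strongest |correlation| edges so that the retained fraction of
        # possible edges equals `sparsity`, then return an unweighted graph.
        n = corr.shape[0]
        iu = np.triu_indices(n, k=1)
        weights = np.abs(corr[iu])
        n_keep = max(1, int(round(sparsity * len(weights))))
        cutoff = np.sort(weights)[::-1][n_keep - 1]
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i, j, w in zip(iu[0], iu[1], weights):
            if w >= cutoff:
                G.add_edge(int(i), int(j))
        return G

    rng = np.random.default_rng(0)
    series = rng.standard_normal((100, 90))      # 100 time points x 90 regions (toy data)
    G = binarize_to_sparsity(np.corrcoef(series, rowvar=False), sparsity=0.15)

    if nx.is_connected(G):
        print("characteristic path length:", nx.average_shortest_path_length(G))
    print("clustering coefficient:", nx.average_clustering(G))
    print("global efficiency:", nx.global_efficiency(G))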

  20. The effect of cochlear implantation in development of intelligence quotient of 6-9 deaf children in comparison with normal hearing children (Iran, 2009-2011).

    PubMed

    Hashemi, Seyed Basir; Monshizadeh, Leila

    2012-06-01

    Before the introduction of the cochlear implant (CI) in 1980, hearing aids were the only means by which profoundly deaf children had access to auditory stimuli. Nowadays, CI is firmly established as an effective option in the speech and language rehabilitation of deaf children, but much of the literature regarding outcomes for children after CI focuses on the development of speech, and less is known about language acquisition. The main aim of this study was therefore to evaluate the verbal intelligence quotient (IQ) of cochlear implanted children in comparison with normal-hearing children. Thirty cochlear implanted and 30 normal-hearing children of the same age and with similar socio-economic levels were compared using a revised Persian version of the WISC test (Wechsler, 1991). The data were analyzed with SPSS version 16. Although the cochlear implanted children did well on the different parameters of the WISC test, their average scores were lower than those of the normal-hearing children; on similarities (one of the parameters of the WISC test), however, the two groups' performance was approximately the same. CI plays an important role in the development of verbal IQ and language acquisition in deaf children, and various studies indicate that most cochlear implanted children show less language delay over time. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. [Improvement in Phoneme Discrimination in Noise in Normal Hearing Adults].

    PubMed

    Schumann, A; Garea Garcia, L; Hoppe, U

    2017-02-01

    Objective: The aim of the study was to examine whether phoneme discrimination in noise can be trained in normal-hearing adults and how effective such training is for speech recognition in noise. A specific computerised training program, consisting of special nonsense syllables presented in background noise, was used to train the participants' discrimination ability. Material and Methods: 46 normal-hearing subjects took part in this study, 28 as training group participants and 18 as control group participants. Only the training group subjects trained over a period of 3 weeks, twice a week for an hour, with the computer-based training program. Speech recognition in noise was measured pre- and post-training for the training group subjects with the Freiburger Einsilber Test. The control group subjects completed test and retest measurements separated by a 2-3 week break. For the training group, follow-up speech recognition was measured 2-3 months after the end of the training. Results: The majority of the training group subjects improved their phoneme discrimination significantly. In addition, their speech recognition in noise improved significantly over the course of the training compared with the control group and remained stable afterwards. Conclusions: Phoneme discrimination in noise can be trained in normal-hearing adults, and the improvements have a positive effect on speech recognition in noise that persists over a longer period of time. © Georg Thieme Verlag KG Stuttgart · New York.

  2. The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users.

    PubMed

    Fu, Qian-Jie; Chinchilla, Sherol; Galvin, John J

    2004-09-01

    The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4-8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
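
    Acoustic simulations of cochlear-implant processing like the one described above are commonly implemented as noise-band vocoders: the signal is split into a small number of analysis bands, each band's envelope is extracted with a low-pass filter (the 20-320 Hz cutoffs manipulated here), and each envelope modulates band-limited noise. The sketch below is a generic vocoder of that type under those assumptions, not the authors' processor.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def noise_vocoder(signal, fs, n_channels=4, env_cutoff_hz=160,
                      f_lo=100.0, f_hi=7000.0):
        # Logarithmically spaced analysis bands, full-wave rectification plus
        # low-pass envelope extraction, and band-limited noise carriers.
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)
        env_sos = butter(2, env_cutoff_hz, btype="low", fs=fs, output="sos")
        rng = np.random.default_rng(0)
        out = np.zeros_like(signal)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(band_sos, signal)
            envelope = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
            carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
            out += envelope * carrier
        return out

    # Usage (illustrative): vocode one second of a 1 kHz tone sampled at 16 kHz
    # with 8 channels and a 320 Hz envelope cutoff.
    fs = 16000
    t = np.arange(fs) / fs
    vocoded = noise_vocoder(np.sin(2 * np.pi * 1000 * t), fs,
                            n_channels=8, env_cutoff_hz=320)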

  3. Speech perception benefits of internet versus conventional telephony for hearing-impaired individuals.

    PubMed

    Mantokoudis, Georgios; Dubach, Patrick; Pfiffner, Flurin; Kompis, Martin; Caversaccio, Marco; Senn, Pascal

    2012-07-16

    Telephone communication is a challenge for many hearing-impaired individuals. One important technical reason for this difficulty is the restricted frequency range (0.3-3.4 kHz) of conventional landline telephones. Internet telephony (voice over Internet protocol [VoIP]) is transmitted with a larger frequency range (0.1-8 kHz) and therefore includes more frequencies relevant to speech perception. According to a recently published, laboratory-based study, the theoretical advantage of ideal VoIP conditions over conventional telephone quality has translated into improved speech perception by hearing-impaired individuals. However, the speech perception benefits of nonideal VoIP network conditions, which may occur in daily life, have not been explored. VoIP use cannot be recommended to hearing-impaired individuals before its potential under more realistic conditions has been examined. To compare realistic VoIP network conditions, under which digital data packets may be lost, with ideal conventional telephone quality with respect to their impact on speech perception by hearing-impaired individuals. We assessed speech perception using standardized test material presented under simulated VoIP conditions with increasing digital data packet loss (from 0% to 20%) and compared with simulated ideal conventional telephone quality. We monaurally tested 10 adult users of cochlear implants, 10 adult users of hearing aids, and 10 normal-hearing adults in the free sound field, both in quiet and with background noise. Across all participant groups, mean speech perception scores using VoIP with 0%, 5%, and 10% packet loss were 15.2% (range 0%-53%), 10.6% (4%-46%), and 8.8% (7%-33%) higher, respectively, than with ideal conventional telephone quality. Speech perception did not differ between VoIP with 20% packet loss and conventional telephone quality. The maximum benefits were observed under ideal VoIP conditions without packet loss and were 36% (P = .001) for cochlear implant users, 18% (P = .002) for hearing aid users, and 53% (P = .001) for normal-hearing adults. With a packet loss of 10%, the maximum benefits were 30% (P = .002) for cochlear implant users, 6% (P = .38) for hearing aid users, and 33% (P = .002) for normal-hearing adults. VoIP offers a speech perception benefit over conventional telephone quality, even when mild or moderate packet loss scenarios are created in the laboratory. VoIP, therefore, has the potential to significantly improve telecommunication abilities for the large community of hearing-impaired individuals.
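
    The packet-loss manipulation can be pictured with a minimal sketch like the one below, which zeroes out randomly selected 20-ms packets of an audio signal; the packet duration and the simple zero substitution (no loss concealment) are assumptions for illustration, not the authors' simulation.

    ```python
    import numpy as np

    def drop_packets(x, fs, loss_rate=0.10, packet_ms=20.0, seed=0):
        """Zero out randomly chosen packets of audio to mimic VoIP packet loss.
        Zero substitution (no concealment) is an assumption for illustration."""
        rng = np.random.default_rng(seed)
        n = int(fs * packet_ms / 1000.0)          # samples per packet
        y = x.copy().astype(float)
        for start in range(0, len(x), n):
            if rng.random() < loss_rate:
                y[start:start + n] = 0.0          # lost packet
        return y

    fs = 8000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t)
    for loss in (0.0, 0.05, 0.10, 0.20):          # the study's loss conditions
        degraded = drop_packets(tone, fs, loss_rate=loss)
    ```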

  4. Speech Perception Benefits of Internet Versus Conventional Telephony for Hearing-Impaired Individuals

    PubMed Central

    Dubach, Patrick; Pfiffner, Flurin; Kompis, Martin; Caversaccio, Marco

    2012-01-01

    Background Telephone communication is a challenge for many hearing-impaired individuals. One important technical reason for this difficulty is the restricted frequency range (0.3–3.4 kHz) of conventional landline telephones. Internet telephony (voice over Internet protocol [VoIP]) is transmitted with a larger frequency range (0.1–8 kHz) and therefore includes more frequencies relevant to speech perception. According to a recently published, laboratory-based study, the theoretical advantage of ideal VoIP conditions over conventional telephone quality has translated into improved speech perception by hearing-impaired individuals. However, the speech perception benefits of nonideal VoIP network conditions, which may occur in daily life, have not been explored. VoIP use cannot be recommended to hearing-impaired individuals before its potential under more realistic conditions has been examined. Objective To compare realistic VoIP network conditions, under which digital data packets may be lost, with ideal conventional telephone quality with respect to their impact on speech perception by hearing-impaired individuals. Methods We assessed speech perception using standardized test material presented under simulated VoIP conditions with increasing digital data packet loss (from 0% to 20%) and compared with simulated ideal conventional telephone quality. We monaurally tested 10 adult users of cochlear implants, 10 adult users of hearing aids, and 10 normal-hearing adults in the free sound field, both in quiet and with background noise. Results Across all participant groups, mean speech perception scores using VoIP with 0%, 5%, and 10% packet loss were 15.2% (range 0%–53%), 10.6% (4%–46%), and 8.8% (7%–33%) higher, respectively, than with ideal conventional telephone quality. Speech perception did not differ between VoIP with 20% packet loss and conventional telephone quality. The maximum benefits were observed under ideal VoIP conditions without packet loss and were 36% (P = .001) for cochlear implant users, 18% (P = .002) for hearing aid users, and 53% (P = .001) for normal-hearing adults. With a packet loss of 10%, the maximum benefits were 30% (P = .002) for cochlear implant users, 6% (P = .38) for hearing aid users, and 33% (P = .002) for normal-hearing adults. Conclusions VoIP offers a speech perception benefit over conventional telephone quality, even when mild or moderate packet loss scenarios are created in the laboratory. VoIP, therefore, has the potential to significantly improve telecommunication abilities for the large community of hearing-impaired individuals. PMID:22805169

  5. Could driving safety be compromised by noise exposure at work and noise-induced hearing loss?

    PubMed

    Picard, Michel; Girard, Serge André; Courteau, Marilène; Leroux, Tony; Larocque, Richard; Turcotte, Fernand; Lavoie, Michel; Simard, Marc

    2008-10-01

    A study was conducted to verify whether there is an association between occupational noise exposure, noise-induced hearing loss, and driving safety, expanding on previous findings by Picard et al. (2008) that the two factors increase accident risk in the workplace. This study was made possible when driving records of all Quebec drivers were made available by the Societe de l'assurance automobile du Quebec (SAAQ, the state monopoly responsible for the provision of motor vehicle insurance and the compensation of victims of traffic accidents). These records were linked with personal records maintained by the Quebec National Institute of Public Health as part of its mission to prevent noise-induced hearing loss in the workplace. Individualized information on occupational noise exposure and hearing sensitivity was available for 46,030 male workers employed in noisy industries who also held a valid driver's permit. The observation period was five years, starting with the most recent audiometric examination. The associations between occupational noise exposure levels, hearing status, and personal driving record were examined by log-binomial regression on data adjusted for age and duration of exposure. Daily noise exposures and bilateral average hearing threshold levels at 3, 4, and 6 kHz were used as independent variables, while the dependent variables were 1) the number of motor vehicle accidents experienced by participants during the study period and 2) participants' records of registered traffic violations of the highway safety code. The findings are reported as prevalence ratios (PRs) with their 95% confidence intervals (CIs). Attributable numbers of events were computed with the relevant PRs, with workers exposed to less noise and those with normal hearing levels forming the reference group. Adjusting for age confirmed that experienced workers had fewer traffic accidents. The data show that occupational noise exposure and hearing loss have the same effect on the driving safety record as that reported for the risk of accidents in noisy industrial settings. Specifically, the risk of a traffic accident (PR = 1.07, 95% CI [1.01; 1.15]) is significantly associated with daily occupational noise exposures ≥ 100 dBA. For participants with a bilateral average hearing loss ranging from 16 to 30 dB, the PR of a traffic accident is 1.06 (95% CI [1.01; 1.11]) and reaches 1.31 (95% CI [1.2; 1.42]) when the hearing loss exceeds 50 dB. A reduction in the number of speeding violations occurred among workers occupationally exposed to noise levels ≥ 90 dBA and those with noise-induced hearing loss ≥ 16 dB. By contrast, the same individuals had an increase in other violations of the highway safety code. This suggests that noise-exposed workers might be less vigilant to other traffic hazards. Daily occupational noise exposures ≥ 100 dBA and noise-induced hearing losses, even when just barely noticeable, may interfere with the safe operation of motor vehicles.
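
    For readers unfamiliar with prevalence ratios, the sketch below computes a crude PR with a Wald-type 95% CI from 2 × 2 counts; the counts are invented for illustration, and the study's actual log-binomial models additionally adjusted for age and duration of exposure.

    ```python
    import math

    def prevalence_ratio(a, n1, c, n0):
        """Crude prevalence ratio with a Wald 95% CI on the log scale.
        a/n1: events/total in the exposed group; c/n0: events/total in the reference group."""
        pr = (a / n1) / (c / n0)
        se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
        lo = math.exp(math.log(pr) - 1.96 * se_log)
        hi = math.exp(math.log(pr) + 1.96 * se_log)
        return pr, lo, hi

    # Invented counts for illustration only (not the study's data)
    pr, lo, hi = prevalence_ratio(a=450, n1=4000, c=4200, n0=40000)
    print(f"PR = {pr:.2f} (95% CI [{lo:.2f}; {hi:.2f}])")
    ```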

  6. Clinical assessment of pitch perception.

    PubMed

    Vaerenberg, Bart; Pascu, Alexandru; Del Bo, Luca; Schauwers, Karen; De Ceulaer, Geert; Daemers, Kristin; Coene, Martine; Govaerts, Paul J

    2011-07-01

    The perception of pitch has recently gained attention. At present, clinical audiologic tests to assess this are hardly available. This article reports on the development of a clinical test using harmonic intonation (HI) and disharmonic intonation (DI). Prospective collection of normative data and pilot study in hearing-impaired subjects. Tertiary referral center. Normative data were collected from 90 normal-hearing subjects recruited from 3 different language backgrounds. The pilot study was conducted on 18 hearing-impaired individuals who were selected into 3 pathologic groups: high-frequency hearing loss (HF), low-frequency hearing loss (LF), and cochlear implant users (CI). Normative data collection and exploratory diagnostics by means of the newly constructed HI/DI tests using intonation patterns to find the just noticeable difference (JND) for pitch discrimination in low-frequency harmonic complex sounds presented in a same-different task. JND for pitch discrimination using the HI/DI tests in the normal-hearing population and the pathologic groups. Normative data are presented as five-parameter statistics and box-and-whisker plots showing median JNDs of 2 Hz (HI) and 3 Hz (DI). The results on both tests are statistically abnormal in LF and CI subjects, whereas they are not significantly abnormal in the HF group. The HI and DI tests allow the clinical assessment of low-frequency pitch perception. The data obtained in this study define the normal zone for both tests. Preliminary results indicate possible abnormal temporal fine structure (TFS) perception in some hearing-impaired subjects.

  7. Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses

    PubMed Central

    Scheidt, Ryan E.; Kale, Sushrut; Heinz, Michael G.

    2010-01-01

    Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids. PMID:20696230
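
    A minimal sketch, under assumed bin widths and analysis windows, of how a post-stimulus time histogram and simple onset/adapted-rate summaries of the kind described above might be computed from pooled spike times.

    ```python
    import numpy as np

    def psth(spike_times_s, n_trials, t_max_s=0.05, bin_ms=0.5):
        """Post-stimulus time histogram in spikes/s from spike times pooled over trials."""
        bin_s = bin_ms / 1000.0
        edges = np.arange(0.0, t_max_s + bin_s, bin_s)
        counts, _ = np.histogram(spike_times_s, bins=edges)
        rate = counts / (n_trials * bin_s)            # counts -> firing rate
        centers = edges[:-1] + bin_s / 2
        return centers, rate

    def onset_and_adapted_rate(centers, rate, onset_win=(0.0, 0.005),
                               steady_win=(0.03, 0.05)):
        """Illustrative summary metrics: peak onset rate, mean adapted rate, % adaptation."""
        onset = rate[(centers >= onset_win[0]) & (centers < onset_win[1])].max()
        steady = rate[(centers >= steady_win[0]) & (centers < steady_win[1])].mean()
        percent_adaptation = 100.0 * (onset - steady) / onset
        return onset, steady, percent_adaptation

    # Fake pooled spike times from 100 repetitions of a 50-ms tone (illustration only)
    rng = np.random.default_rng(1)
    spikes = rng.uniform(0.0, 0.05, size=3000)
    centers, rate = psth(spikes, n_trials=100)
    onset, steady, pct = onset_and_adapted_rate(centers, rate)
    ```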

  8. On The (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds.

    PubMed

    Füllgrabe, Christian; Rosen, Stuart

    2016-01-01

    With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory (WM). Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor for variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.

  9. Effects of Noise on Speech Recognition and Listening Effort in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss.

    PubMed

    Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin

    2016-10-01

    This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percentage correct performance and verbal response time (VRT) were assessed (onset time, total duration). In general, speech recognition improved as signal-to-noise ratio (SNR) increased both for children with NH and children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses. For correct responses only, there was a general increase in VRT with decreasing SNR. Findings indicate poorer sentence recognition in children with NH and MBHL or UHL as SNR decreases. VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified. Increasing VRT with decreasing SNR for correct responses also supports greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for children in this study.

  10. Acoustic properties of naturally produced clear speech at normal speaking rates

    NASA Astrophysics Data System (ADS)

    Krause, Jean C.; Braida, Louis D.

    2004-01-01

    Sentences spoken "clearly" are significantly more intelligible than those spoken "conversationally" for hearing-impaired listeners in a variety of backgrounds [Picheny et al., J. Speech Hear. Res. 28, 96-103 (1985); Uchanski et al., ibid. 39, 494-509 (1996); Payton et al., J. Acoust. Soc. Am. 95, 1581-1592 (1994)]. While producing clear speech, however, talkers often reduce their speaking rate significantly [Picheny et al., J. Speech Hear. Res. 29, 434-446 (1986); Uchanski et al., ibid. 39, 494-509 (1996)]. Yet speaking slowly is not solely responsible for the intelligibility benefit of clear speech (over conversational speech), since a recent study [Krause and Braida, J. Acoust. Soc. Am. 112, 2165-2172 (2002)] showed that talkers can produce clear speech at normal rates with training. This finding suggests that clear speech has inherent acoustic properties, independent of rate, that contribute to improved intelligibility. Identifying these acoustic properties could lead to improved signal processing schemes for hearing aids. To gain insight into these acoustical properties, conversational and clear speech produced at normal speaking rates were analyzed at three levels of detail (global, phonological, and phonetic). Although results suggest that talkers may have employed different strategies to achieve clear speech at normal rates, two global-level properties were identified that appear likely to be linked to the improvements in intelligibility provided by clear/normal speech: increased energy in the 1000-3000-Hz range of long-term spectra and increased modulation depth of low frequency modulations of the intensity envelope. Other phonological and phonetic differences associated with clear/normal speech include changes in (1) frequency of stop burst releases, (2) VOT of word-initial voiceless stop consonants, and (3) short-term vowel spectra.
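
    The two global-level properties mentioned above can be approximated with straightforward signal-processing measures; the sketch below computes the fraction of long-term spectral energy in the 1000-3000-Hz band and a crude low-frequency modulation-depth index of the intensity envelope. The exact analysis parameters are assumptions, not those of the original study.

    ```python
    import numpy as np
    from scipy.signal import welch, butter, filtfilt, hilbert

    def band_energy_db(x, fs, f_lo=1000.0, f_hi=3000.0):
        """Fraction of long-term spectral energy in [f_lo, f_hi], in dB re total energy."""
        f, pxx = welch(x, fs=fs, nperseg=2048)
        sel = (f >= f_lo) & (f <= f_hi)
        band = np.trapz(pxx[sel], f[sel])
        total = np.trapz(pxx, f)
        return 10 * np.log10(band / total)

    def low_freq_modulation_depth(x, fs, mod_cutoff_hz=10.0):
        """Crude modulation-depth index of the low-frequency intensity envelope."""
        env = np.abs(hilbert(x))
        b, a = butter(2, mod_cutoff_hz / (fs / 2), btype='low')
        slow_env = filtfilt(b, a, env)
        return (slow_env.max() - slow_env.min()) / (slow_env.max() + slow_env.min())

    # Synthetic signal for illustration only
    fs = 16000
    t = np.arange(2 * fs) / fs
    x = np.sin(2 * np.pi * 1500 * t) * (1 + 0.6 * np.sin(2 * np.pi * 4 * t))
    print(band_energy_db(x, fs), low_freq_modulation_depth(x, fs))
    ```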

  11. Identifying hearing loss by means of iridology.

    PubMed

    Stearn, Natalie; Swanepoel, De Wet

    2006-11-13

    Isolated reports of hearing loss presenting as markings on the iris exist, but to date the effectiveness of iridology to identify hearing loss has not been investigated. This study therefore aimed to determine the efficacy of iridological analysis in the identification of moderate to profound sensorineural hearing loss in adolescents. A controlled trial was conducted with an iridologist, blind to the actual hearing status of participants, analyzing the irises of participants with and without hearing loss. Fifty hearing impaired and fifty normal hearing subjects, between the ages of 15 and 19 years, controlled for gender, participated in the study. An experienced iridologist analyzed the randomised set of participants' irises. A 70% correct identification of hearing status was obtained by iridological analysis, with a false negative rate of 41% compared to a 19% false positive rate. The respective sensitivity and specificity rates therefore came to 59% and 81%. Iridological analysis of hearing status indicated a statistically significant relationship to actual hearing status (P < 0.05). Although statistically significant, the sensitivity and specificity rates for identifying hearing loss by iridology were not comparable to those of traditional audiological screening procedures.
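
    A small arithmetic check of the reported screening figures, assuming the stated false negative and false positive rates and the 50/50 group sizes; it reproduces the approximately 70% overall correct identification.

    ```python
    def screening_summary(sensitivity, specificity, n_impaired, n_normal):
        """Overall percent-correct implied by sensitivity/specificity and group sizes."""
        correct = sensitivity * n_impaired + specificity * n_normal
        return 100.0 * correct / (n_impaired + n_normal)

    # Reported rates: sensitivity = 1 - 0.41 = 0.59, specificity = 1 - 0.19 = 0.81
    overall = screening_summary(0.59, 0.81, n_impaired=50, n_normal=50)
    print(f"Implied overall correct identification: {overall:.0f}%")  # ~70%
    ```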

  12. 7 CFR 2.24 - Assistant Secretary for Administration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... (iii) Maintain overall responsibility and control over the Hearing Clerk's activities which include the custody of and responsibility for the control, maintenance, and servicing of the original and permanent..., maintaining, and disposing of real and personal property, including control of space assignments; (B...

  13. Speech perception in noise in unilateral hearing loss.

    PubMed

    Mondelli, Maria Fernanda Capoani Garcia; Dos Santos, Marina de Marchi; José, Maria Renata

    2016-01-01

    Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss face greater difficulties understanding speech than normal listeners. To evaluate the speech perception of individuals with unilateral hearing loss with and without competing noise, before and after the hearing aid fitting process. The study included 30 adults of both genders diagnosed with moderate or severe sensorineural unilateral hearing loss, evaluated using the Hearing In Noise Test (Hearing In Noise Test-Brazil) in the following scenarios: silence, frontal noise, noise to the right, and noise to the left, before and after the hearing aid fitting process. The study participants had a mean age of 41.9 years and most of them presented right unilateral hearing loss. In all conditions evaluated with the Hearing In Noise Test, better performance in speech perception was observed with the use of hearing aids. In the Hearing In Noise Test-Brazil evaluation, individuals with unilateral hearing loss demonstrated better performance in speech perception when using hearing aids, both in silence and in situations with competing noise. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  14. Improving Adult Education for the 21st Century. Hearing Before the Subcommittee on 21st Century Competitiveness of the Committee on Education and the Workforce, House of Representatives, One Hundred Eighth Congress, First Session. Hearing held in Washington, DC, March 4, 2003.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and the Workforce.

    A hearing was held to discuss the prior four years of implementation of the Adult Education and Family Literacy Act, and to recommend further improvements. The opening statements of Chairman Howard McKeon and Dale E. Kildee introduce the meeting and discuss the importance of promoting an educated populace that will maintain the United States'…

  15. Case report: Unilateral conduction hearing loss due to central venous occlusion.

    PubMed

    Ribeiro, Phillip; Patel, Swetal; Qazi, Rizwan A

    2016-05-07

    Central venous stenosis is a well-known complication in patients with vascular access for hemodialysis. We report two cases involving patients on hemodialysis with arteriovenous fistulas who developed reversible unilateral conductive hearing loss secondary to critical stenosis of central veins draining the arteriovenous dialysis access. A proposed mechanism for the patients' reversible unilateral hearing loss is pterygoid venous plexus congestion leading to decreased Eustachian tube patency. Endovascular therapy was conducted to treat the stenosis, and the hearing loss of both patients returned to near normal after successful central venous angioplasty.

  16. Extension of effective date for temporary pilot program setting the time and place for a hearing before an administrative law judge. Final rule.

    PubMed

    2013-07-29

    We are extending our pilot program that authorizes the agency to set the time and place for a hearing before an administrative law judge (ALJ). This final rule will extend the pilot program for 1 year. The extension of the pilot program continues our commitment to improve the efficiency of our hearing process and maintain a hearing process that results in accurate, high-quality decisions for claimants. The current pilot program will expire on August 9, 2013. In this final rule, we are extending the effective date to August 9, 2014. We are making no other substantive changes.

  17. Age-Related Benefits of Digital Noise Reduction for Short-Term Word Learning in Children with Hearing Loss

    ERIC Educational Resources Information Center

    Pittman, Andrea

    2011-01-01

    Purpose: To determine the rate of word learning for children with hearing loss (HL) in quiet and in noise compared to normal-hearing (NH) peers. The effects of digital noise reduction (DNR) were examined for children with HL. Method: Forty-one children with NH and 26 children with HL were grouped by age (8-9 years and 11-12 years). The children…

  18. Effects of Exposure to Inclusion and Socioeconomic Status on Parental Attitudes towards the Inclusion of Deaf and Hard of Hearing Children

    ERIC Educational Resources Information Center

    Most, Tova; Ingber, Sara

    2016-01-01

    The purpose of the study was to investigate the attitudes of parents of normal hearing (NH) children towards the inclusion of deaf and hard of hearing (DHH) children in the educational setting of their child. In particular, it examined the effect of parental socio economic status (SES) and exposure to inclusion (whether their child was in a class…

  19. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    PubMed

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.
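
    A maximum likelihood threshold estimate of the kind used here can be sketched as a logistic psychometric-function fit to trial-by-trial data from an adaptive track; the function form, starting values, and the invented track below are assumptions for illustration only, not the procedure of Versfeld and Dreschler or the authors.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    def fit_psychometric(x, correct):
        """Maximum-likelihood fit of a logistic psychometric function
        p(correct | x) = expit(slope * (x - threshold)); returns (threshold, slope),
        where the threshold is the 50%-recognition point."""
        x = np.asarray(x, float)
        correct = np.asarray(correct, float)

        def neg_log_lik(params):
            threshold, slope = params
            p = np.clip(expit(slope * (x - threshold)), 1e-6, 1 - 1e-6)
            return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

        res = minimize(neg_log_lik, x0=[np.mean(x), -0.1], method="Nelder-Mead")
        return res.x

    # Invented adaptive track: time-compression values and trial-by-trial scoring
    compression = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]
    scored_correct = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
    threshold, slope = fit_psychometric(compression, scored_correct)
    ```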

  20. Examination of the neighborhood activation theory in normal and hearing-impaired listeners.

    PubMed

    Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A

    2001-02-01

    Experiments were conducted to examine the effects of lexical information on word recognition among normal hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density" or the number of phonemically similar words (neighbors) for a particular target item and "neighborhood frequency" or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency" or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high frequency over a low frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors, word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this lexicon, 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as well as to an elderly group of listeners with sensorineural hearing loss in the speech-shaped noise (Experiment 3). The results of the three experiments verified predictions of NAM in both normal hearing and hearing-impaired listeners. In each experiment, words from low density neighborhoods were recognized more accurately than those from high density neighborhoods. The presence of high frequency neighbors (average neighborhood frequency) produced poorer recognition performance than comparable conditions with low frequency neighbors. Word frequency was found to have a highly significant effect on word recognition. Lexical conditions with high word frequencies produced higher performance scores than conditions with low frequency words. The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and "in quiet." The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher level cognitive (lexical) factors on lower level phonemic processing.
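
    Neighborhood density and average neighborhood frequency can be computed from a phonemically transcribed lexicon by counting words that differ from the target by one phoneme substitution, deletion, or addition. The sketch below uses a tiny invented toy lexicon purely for illustration, not the dictionary-based program used in the study.

    ```python
    def one_phoneme_apart(a, b):
        """True if phoneme sequences a and b differ by a single substitution,
        deletion, or addition (the usual neighborhood definition)."""
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(len(a) - len(b)) != 1:
            return False
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

    def neighborhood_stats(target, lexicon):
        """Density = number of neighbors; mean neighbor frequency from the lexicon."""
        neighbors = [(w, f) for w, f in lexicon
                     if w != target and one_phoneme_apart(target, w)]
        density = len(neighbors)
        mean_freq = sum(f for _, f in neighbors) / density if density else 0.0
        return density, mean_freq

    # Toy (phoneme tuple, frequency) lexicon, invented for illustration
    toy_lexicon = [(("k", "ae", "t"), 120), (("b", "ae", "t"), 80),
                   (("k", "ah", "t"), 40), (("k", "ae", "p"), 60),
                   (("k", "ae", "t", "s"), 30), (("d", "ao", "g"), 200)]
    print(neighborhood_stats(("k", "ae", "t"), toy_lexicon))
    ```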

  1. The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and Evaluation.

    PubMed

    Singh, Gurjit; Liskovoi, Lisa; Launer, Stefan; Russo, Frank

    2018-06-11

    The objectives of this research were to develop and evaluate a self-report questionnaire (the Emotional Communication in Hearing Questionnaire or EMO-CHeQ) designed to assess experiences of hearing and handicap when listening to signals that contain vocal emotion information. Study 1 involved internet-based administration of a 42-item version of the EMO-CHeQ to 586 adult participants (243 with self-reported normal hearing [NH], 193 with self-reported hearing impairment but no reported use of hearing aids [HI], and 150 with self-reported hearing impairment and use of hearing aids [HA]). To better understand the factor structure of the EMO-CHeQ and eliminate redundant items, an exploratory factor analysis was conducted. Study 2 involved laboratory-based administration of a 16-item version of the EMO-CHeQ to 32 adult participants (12 normal hearing/near normal hearing (NH/nNH), 10 HI, and 10 HA). In addition, participants completed an emotion-identification task under audio and audiovisual conditions. In study 1, the exploratory factor analysis yielded an interpretable solution with four factors emerging that explained a total of 66.3% of the variance in performance on the EMO-CHeQ. Item deletion resulted in construction of the 16-item EMO-CHeQ. In study 1, both the HI and HA groups reported greater vocal emotion communication handicap on the EMO-CHeQ than did the NH group, but differences in handicap were not observed between the HI and HA groups. In study 2, the same pattern of reported handicap was observed in individuals with audiometrically verified hearing as was found in study 1. On the emotion-identification task, no group differences in performance were observed in the audiovisual condition, but group differences were observed in the audio alone condition. Although the HI and HA groups exhibited similar emotion-identification performance, both groups performed worse than the NH/nNH group, thus suggesting the presence of behavioral deficits that parallel self-reported vocal emotion communication handicap. The EMO-CHeQ was significantly and strongly (r = -0.64) correlated with performance on the emotion-identification task for listeners with hearing impairment. The results from both studies suggest that the EMO-CHeQ appears to be a reliable and ecologically valid measure to rapidly assess experiences of hearing and handicap when listening to signals that contain vocal emotion information. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

  2. 46 CFR 221.87 - Records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 8 2011-10-01 2011-10-01 false Records. 221.87 Section 221.87 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REGULATIONS AFFECTING MARITIME CARRIERS AND RELATED ACTIVITIES... Records. (a) A verbatim transcript of a hearing will not normally be prepared. The Hearing Officer will...

  3. 46 CFR 221.87 - Records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 8 2014-10-01 2014-10-01 false Records. 221.87 Section 221.87 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REGULATIONS AFFECTING MARITIME CARRIERS AND RELATED ACTIVITIES... Records. (a) A verbatim transcript of a hearing will not normally be prepared. The Hearing Officer will...

  4. 46 CFR 221.87 - Records.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 8 2013-10-01 2013-10-01 false Records. 221.87 Section 221.87 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REGULATIONS AFFECTING MARITIME CARRIERS AND RELATED ACTIVITIES... Records. (a) A verbatim transcript of a hearing will not normally be prepared. The Hearing Officer will...

  5. 46 CFR 221.87 - Records.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 8 2012-10-01 2012-10-01 false Records. 221.87 Section 221.87 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REGULATIONS AFFECTING MARITIME CARRIERS AND RELATED ACTIVITIES... Records. (a) A verbatim transcript of a hearing will not normally be prepared. The Hearing Officer will...

  6. Effects of Multisensory Speech Training and Visual Phonics on Speech Production of a Hearing-Impaired Child.

    ERIC Educational Resources Information Center

    Zaccagnini, Cindy M.; Antia, Shirin D.

    1993-01-01

    This study of the effects of intensive multisensory speech training on the speech production of a profoundly hearing-impaired child (age nine) found that the addition of Visual Phonics hand cues did not result in speech production gains. All six target phonemes were generalized to new words and maintained after the intervention was discontinued.…

  7. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram.

    PubMed

    Hossain, Mohammad E; Jassim, Wissam A; Zilany, Muhammad S A

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants.
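
    A direct (FFT-averaging) bispectrum estimate of the general kind referred to above can be sketched as follows; the segment length, window, and normalization are assumptions, and the input here is a synthetic signal rather than an auditory neurogram.

    ```python
    import numpy as np

    def bispectrum(x, seg_len=256, hop=128):
        """Direct bispectrum estimate: average of X(f1) X(f2) X*(f1+f2) over
        Hann-windowed segments. Returns a (seg_len//2, seg_len//2) complex array."""
        x = np.asarray(x, float)
        win = np.hanning(seg_len)
        half = seg_len // 2
        acc = np.zeros((half, half), dtype=complex)
        n_seg = 0
        f1 = np.arange(half)
        sum_idx = (f1[:, None] + f1[None, :]) % seg_len    # index of f1 + f2
        for start in range(0, len(x) - seg_len + 1, hop):
            X = np.fft.fft(win * x[start:start + seg_len])
            acc += X[f1][:, None] * X[f1][None, :] * np.conj(X[sum_idx])
            n_seg += 1
        return acc / max(n_seg, 1)

    # Quadratically phase-coupled tones show up as a bispectral peak (illustration)
    fs, dur = 1000, 4.0
    t = np.arange(int(fs * dur)) / fs
    x = np.cos(2*np.pi*60*t) + np.cos(2*np.pi*90*t) + 0.5*np.cos(2*np.pi*150*t)
    B = np.abs(bispectrum(x))
    ```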

  8. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram

    PubMed Central

    Hossain, Mohammad E.; Jassim, Wissam A.; Zilany, Muhammad S. A.

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants. PMID:26967160

  9. Bilateral hearing loss is associated with decreased nonverbal intelligence in US children aged 6 to 16 years.

    PubMed

    Emmett, Susan D; Francis, Howard W

    2014-09-01

    To evaluate the association between hearing loss and nonverbal intelligence in US children. The Third National Health and Nutrition Examination Survey (NHANES III) is a cross-sectional survey (1988-1994) that used complex multistage sampling design to produce nationally representative demographic and examination data. A total of 4,823 children ages 6 to 16 years completed audiometric evaluation and cognitive testing during NHANES III. Hearing loss was defined as low-frequency pure-tone average (PTA) >25 dB (0.5, 1, 2 kHz) or high-frequency PTA >25 dB (3, 4, 6, 8 kHz) and was designated as unilateral or bilateral. Nonverbal intelligence was measured using the Wechsler Intelligence Scale for Children-Revised block design subtest. Low nonverbal intelligence was defined as a standardized score <4, two standard deviations below the standardized mean of 10. Mean nonverbal intelligence scores differed between children with normal hearing (9.59) and children with bilateral (6.87; P = .02) but not unilateral (9.12; P = .42) hearing loss. Non-Hispanic black race/ethnicity and family income <$20,000 were associated with 3.92 and 1.67 times higher odds of low nonverbal intelligence, respectively (odds ratio [OR]: 3.92; P < .001; OR: 1.67; P = .02). Bilateral hearing loss was independently associated with 5.77 times increased odds of low nonverbal intelligence compared to normal hearing children (OR: 5.77; P = .02). Unilateral hearing loss was not associated with higher odds of low nonverbal intelligence (OR: 0.73; P = .40). Bilateral but not unilateral hearing loss is associated with decreased nonverbal intelligence in US children. Longitudinal studies are urgently needed to better understand these associations and their potential impact on future opportunities. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
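
    The pure-tone-average criterion used to define hearing loss in this study is simple to state in code; the sketch below computes low- and high-frequency PTAs from an illustrative audiogram and applies the >25 dB HL cutoff. The example thresholds are invented.

    ```python
    def pure_tone_average(thresholds_db, freqs_hz):
        """Mean threshold (dB HL) across the requested audiometric frequencies."""
        return sum(thresholds_db[f] for f in freqs_hz) / len(freqs_hz)

    def classify_ear(thresholds_db):
        """Hearing loss if the low-frequency PTA (0.5, 1, 2 kHz) or the
        high-frequency PTA (3, 4, 6, 8 kHz) exceeds 25 dB HL."""
        low = pure_tone_average(thresholds_db, (500, 1000, 2000))
        high = pure_tone_average(thresholds_db, (3000, 4000, 6000, 8000))
        return low > 25 or high > 25

    # Illustrative audiogram for one ear (frequency in Hz -> threshold in dB HL)
    ear = {500: 10, 1000: 15, 2000: 20, 3000: 30, 4000: 35, 6000: 40, 8000: 45}
    print(classify_ear(ear))  # True: high-frequency PTA = 37.5 dB HL > 25 dB HL
    ```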

  10. Binaural Hearing Ability With Bilateral Bone Conduction Stimulation in Subjects With Normal Hearing: Implications for Bone Conduction Hearing Aids.

    PubMed

    Zeitooni, Mehrnaz; Mäki-Torkko, Elina; Stenfelt, Stefan

    The purpose of this study is to evaluate binaural hearing ability in adults with normal hearing when bone conduction (BC) stimulation is bilaterally applied at the bone conduction hearing aid (BCHA) implant position as well as at the audiometric position on the mastoid. The results with BC stimulation are compared with bilateral air conduction (AC) stimulation through earphones. Binaural hearing ability is investigated with tests of spatial release from masking and binaural intelligibility level difference using sentence material, binaural masking level difference with tonal chirp stimulation, and precedence effect using noise stimulus. In all tests, results with bilateral BC stimulation at the BCHA position illustrate an ability to extract binaural cues similar to BC stimulation at the mastoid position. The binaural benefit is overall greater with AC stimulation than BC stimulation at both positions. The binaural benefit for BC stimulation at the mastoid and BCHA position is approximately half in terms of decibels compared with AC stimulation in the speech based tests (spatial release from masking and binaural intelligibility level difference). For binaural masking level difference, the binaural benefit for the two BC positions with chirp signal phase inversion is approximately twice the benefit with inverted phase of the noise. The precedence effect results with BC stimulation at the mastoid and BCHA position are similar for low frequency noise stimulation but differ with high-frequency noise stimulation. The results confirm that binaural hearing processing with bilateral BC stimulation at the mastoid position is also present at the BCHA implant position. This indicates the ability for binaural hearing in patients with good cochlear function when using bilateral BCHAs.

  11. Bilateral Hearing Loss is Associated with Decreased Nonverbal Intelligence in US Children Ages 6 to 16 Years

    PubMed Central

    Emmett, Susan D.; Francis, Howard W.

    2017-01-01

    Objectives To evaluate the association between hearing loss and nonverbal intelligence in US children. Study Design The Third National Health and Nutrition Examination Survey (NHANES III) is a cross-sectional survey (1988–1994) that used complex multistage sampling design to produce nationally representative demographic and examination data. Methods A total of 4823 children ages 6–16 years completed audiometric evaluation and cognitive testing during NHANES III. Hearing loss was defined as low frequency pure tone average (PTA)>25 decibels (dB) (0.5,1,2 kHz) or high frequency PTA>25dB (3,4,6,8 kHz) and was designated as unilateral or bilateral. Nonverbal intelligence was measured using the Wechsler Intelligence Scale for Children-Revised block design subtest. Low nonverbal intelligence was defined as a standardized score <4, two standard deviations below the standardized mean of 10. Results Mean nonverbal intelligence scores differed between children with normal hearing (9.59) and children with bilateral (6.87; p=0.02) but not unilateral (9.12; p=0.42) hearing loss. Non-Hispanic black race/ethnicity and family income<$20,000 were associated with 3.92 and 1.67 times higher odds of low nonverbal intelligence, respectively (OR 3.92; p<0.001; OR 1.67; p=0.02). Bilateral hearing loss was independently associated with 5.77 times increased odds of low nonverbal intelligence compared to normal hearing children (OR 5.77; p=0.02). Unilateral hearing loss was not associated with higher odds of low nonverbal intelligence (OR 0.73; p=0.40). Conclusion Bilateral but not unilateral hearing loss is associated with decreased nonverbal intelligence in US children. Longitudinal studies are urgently needed to better understand these associations and their potential impact on future opportunities. PMID:24913183

  12. Early postnatal virus inoculation into the scala media achieved extensive expression of exogenous green fluorescent protein in the inner ear and preserved auditory brainstem response thresholds.

    PubMed

    Wang, Yunfeng; Sun, Yu; Chang, Qing; Ahmad, Shoeb; Zhou, Binfei; Kim, Yeunjung; Li, Huawei; Lin, Xi

    2013-01-01

    Gene transfer into the inner ear is a promising approach for treating sensorineural hearing loss. The special electrochemical environment of the scala media raises a formidable challenge for effective gene delivery at the same time as keeping normal cochlear function intact. The present study aimed to define a suitable strategy for preserving hearing after viral inoculation directly into the scala media performed at various postnatal developmental stages. We assessed transgene expression of green fluorescent protein (GFP) mediated by various types of adeno-associated virus (AAV) and lentivirus (LV) in the mouse cochlea. Auditory brainstem responses were measured 30 days after inoculation to assess effects on hearing. Patterns of GFP expression confirmed extensive exogenous gene expression in various types of cells lining the endolymphatic space. The use of different viral vectors and promoters resulted in specific cellular GFP expression patterns. AAV2/1 with cytomegalovirus promoter apparently gave the best results for GFP expression in the supporting cells. Histological examination showed normal cochlear morphology and no hair cell loss after either AAV or LV injections. We found that hearing thresholds were not significantly changed when the injections were performed in mice younger than postnatal day 5, regardless of the type of virus tested. Viral inoculation and expression in the inner ear for the restoration of hearing must not damage cochlear function. Using normal hearing mice as a model, we have achieved this necessary step, which is required for the treatment of many types of congenital deafness that require early intervention. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Communication as an ecological system.

    PubMed

    Borg, Erik; Bergkvist, Christina; Olsson, Inga-Stina; Wikström, Carina; Borg, Birgitta

    2008-11-01

    A conceptual framework for human communication, based on traditional biological ecology, is further developed. The difference between communication at the message and behavioural levels is emphasized. Empirical data are presented from various studies, showing that degree of satisfaction with communication is correlated with how close the outcome is to the memory of function prior to hearing impairment. We found no indication that hearing-impaired subjects overestimated their previous hearing or the hearing of normal-hearing people. Satisfaction was also correlated with the outcome and degree of fulfillment of expectations. It did not correlate with improvement of function. The concept of balance was presented and tested using a semi-quantitative approach. Several projects were presented in which the framework was applied: the hearing impaired as counsellor, choosing sides in unilateral deafness, a monitoring device for the deafblind, interaction between Swedish as a second language and hearing impairment, language development in hearing impaired children. By regarding hearing as a component of a communicative system, the perspective of audiological analysis and rehabilitation is broadened.

  14. Perception of Binaural Cues Develops in Children Who Are Deaf through Bilateral Cochlear Implantation

    PubMed Central

    Gordon, Karen A.; Deighton, Michael R.; Abbasalipour, Parvaneh; Papsin, Blake C.

    2014-01-01

    There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click- or electrical pulse- trains. Interaural level cues were identified by most participants including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing. PMID:25531107

  15. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    PubMed Central

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  16. Accuracy of cochlear implant recipients in speech reception in the presence of background music.

    PubMed

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia

    2012-12-01

    This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.

  17. Assessment of a directional microphone array for hearing-impaired listeners.

    PubMed

    Soede, W; Bilsen, F A; Berkhout, A J

    1993-08-01

    Hearing-impaired listeners often have great difficulty understanding speech in surroundings with background noise or reverberation. Based on array techniques, two microphone prototypes (broadside and endfire) have been developed with strongly directional characteristics [Soede et al., "Development of a new directional hearing instrument based on array technology," J. Acoust. Soc. Am. 94, 785-798 (1993)]. Physical measurements show that the arrays attenuate reverberant sound by 6 dB (free-field) and can improve the signal-to-noise ratio by 7 dB in a diffuse noise field (measured with a KEMAR manikin). For the clinical assessment of these microphones an experimental setup was made in a sound-insulated listening room with one loudspeaker in front of the listener simulating the partner in a discussion and eight loudspeakers placed on the edges of a cube producing a diffuse background noise. The hearing-impaired subject wearing his own (familiar) hearing aid is placed in the center of the cube. The speech-reception threshold in noise for simple Dutch sentences was determined with a normal single omnidirectional microphone and with one of the microphone arrays. The results of monaural listening tests with hearing impaired subjects show that in comparison with an omnidirectional hearing-aid microphone the broadside and endfire microphone array gives a mean improvement of the speech reception threshold in noise of 7.0 dB (26 subjects) and 6.8 dB (27 subjects), respectively. Binaural listening with two endfire microphone arrays gives a binaural improvement which is comparable to the binaural improvement obtained by listening with two normal ears or two conventional hearing aids.
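
    The directional benefit of such arrays rests on delay-and-sum beamforming; the sketch below shows a crude whole-sample delay-and-sum for an endfire line array. The microphone count, spacing, and integer-sample delays are illustrative assumptions, not the prototypes' actual design.

    ```python
    import numpy as np

    def endfire_delay_and_sum(mic_signals, fs, spacing_m=0.05, c=343.0):
        """Delay-and-sum beamformer steered along an endfire line array.
        mic_signals: (n_mics, n_samples); mic 0 is closest to the target, so
        mic i is advanced by i * spacing / c before summing (whole samples only)."""
        n_mics, n_samples = mic_signals.shape
        out = np.zeros(n_samples)
        for i in range(n_mics):
            delay = int(round(i * spacing_m / c * fs))      # crude integer delay
            out[:n_samples - delay] += mic_signals[i, delay:]
        return out / n_mics

    # Toy demonstration: target tone from the endfire direction plus uncorrelated noise
    fs, n_mics, spacing = 16000, 5, 0.05
    t = np.arange(fs) / fs
    rng = np.random.default_rng(0)
    mics = np.zeros((n_mics, fs))
    for i in range(n_mics):
        lag = int(round(i * spacing / 343.0 * fs))
        mics[i, lag:] = np.sin(2 * np.pi * 500 * t[:fs - lag])   # delayed target
        mics[i] += 0.5 * rng.standard_normal(fs)                 # diffuse-like noise
    enhanced = endfire_delay_and_sum(mics, fs, spacing_m=spacing)
    ```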

  18. Speech prosody perception in cochlear implant users with and without residual hearing.

    PubMed

    Marx, Mathieu; James, Christopher; Foxton, Jessica; Capber, Amandine; Fraysse, Bernard; Barone, Pascal; Deguine, Olivier

    2015-01-01

    The detection of fundamental frequency (F0) variations plays a prominent role in the perception of intonation. Cochlear implant (CI) users with residual hearing might have access to these F0 cues. The objective was to study if and how residual hearing facilitates speech prosody perception in CI users. The authors compared F0 difference limen (F0DL) and question/statement discrimination performance for 15 normal-hearing subjects (NHS) and two distinct groups of CI subjects, according to the presence or absence of acoustic residual hearing: one "combined group" (n = 11) with residual hearing and one CI-only group (n = 10) without any residual hearing. To assess the relative contribution of the different acoustic cues for intonation perception, the sensitivity index d' was calculated for three distinct auditory conditions: one condition with original recordings, one condition with a constant F0, and one with equalized duration and amplitude. In the original condition, combined subjects showed better question/statement discrimination than CI-only subjects, d' 2.44 (SE 0.3) and 0.91 (SE 0.25), respectively. Mean d' score of NHS was 3.3 (SE 0.06). When F0 variations were removed, the scores decreased significantly for combined subjects (d' = 0.66, SE 0.51) and NHS (d' = 0.4, SE 0.09). Duration and amplitude equalization affected the scores of CI-only subjects (mean d' = 0.34, SE 0.28) but did not influence the scores of combined subjects (d' = 2.7, SE 0.02) or NHS (d' = 3.3, SE 0.33). Mean F0DL was poorer in CI-only subjects (34%, SE 15) compared with combined subjects (8.8%, SE 1.4) and NHS (2.4%, SE 0.05). In CI subjects with residual hearing, intonation d' score was correlated with mean residual hearing level (r = -0.86, n = 11, p < 0.001) and mean F0DL (r = 0.84, n = 11, p < 0.001). Where CI subjects with residual hearing had thresholds better than 60 dB HL in the low frequencies, they displayed near-normal question/statement discrimination abilities. Normal listeners mainly relied on F0 variations which were the most effective prosodic cue. In comparison, CI subjects without any residual hearing had poorer F0 discrimination and showed a strong deficit in speech prosody perception. However, this CI-only group appeared to be able to make some use of amplitude and duration cues for statement/question discrimination.
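
    The sensitivity index d' reported above is computed from hit and false-alarm rates; a minimal sketch, with invented counts for a question/statement task and a standard correction for rates of exactly 0 or 1, is shown below.

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
        1/(2N) correction applied to rates of exactly 0 or 1."""
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        h = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
        fa = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
        return norm.ppf(h) - norm.ppf(fa)

    # Invented counts for a question/statement task (40 questions, 40 statements)
    print(round(d_prime(hits=36, misses=4, false_alarms=6, correct_rejections=34), 2))
    ```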

  19. Audiometric Characteristics of Hyperacusis Patients

    PubMed Central

    Sheldrake, Jacqueline; Diehl, Peter U.; Schaette, Roland

    2015-01-01

    Hyperacusis is a frequent auditory disorder in which sounds of normal volume are perceived as too loud or even painfully loud. There is a high degree of co-morbidity between hyperacusis and tinnitus: most hyperacusis patients also have tinnitus, but only about 30–40% of tinnitus patients also show symptoms of hyperacusis. In order to elucidate the mechanisms of hyperacusis, detailed measurements of loudness discomfort levels (LDLs) across the hearing range would be desirable. However, previous studies have only reported LDLs for a restricted frequency range, e.g., from 0.5 to 4 kHz or from 1 to 8 kHz. We have measured audiograms and LDLs in 381 patients with a primary complaint of hyperacusis for the full standard audiometric frequency range from 0.125 to 8 kHz. On average, patients had mild high-frequency hearing loss, but more than a third of the tested ears had normal hearing thresholds (HTs), i.e., ≤20 dB HL. LDLs were found to be significantly decreased compared to a normal-hearing reference group, with average values around 85 dB HL across the frequency range. However, receiver operating characteristic analysis showed that LDL measurements are neither sensitive nor specific enough to serve as a single test for hyperacusis. There was a moderate positive correlation between HTs and LDLs (r = 0.36), i.e., LDLs tended to be higher at frequencies where hearing loss was present, suggesting that hyperacusis is unlikely to be caused by HT increase, in contrast to tinnitus for which hearing loss is a main trigger. Moreover, our finding that LDLs are decreased across the full range of audiometric frequencies, regardless of the pattern or degree of hearing loss, indicates that hyperacusis might be due to a generalized increase in auditory gain. Tinnitus, on the other hand, is thought to be caused by neuroplastic changes in a restricted frequency range, suggesting that tinnitus and hyperacusis might not share a common mechanism. PMID:26029161
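
    A receiver operating characteristic (ROC) analysis like the one described above asks how well a single LDL value separates hyperacusis ears from reference ears. The sketch below uses simulated LDL distributions (the means of 85 and 100 dB HL are loosely motivated by the abstract but otherwise invented) purely to show the mechanics.

```python
# Minimal ROC sketch: how well a single LDL value separates hyperacusis ears
# from normal-hearing reference ears. The simulated distributions are
# illustrative, not the study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
ldl_hyperacusis = rng.normal(85, 12, 400)   # lower LDLs on average
ldl_reference = rng.normal(100, 10, 400)

scores = np.concatenate([ldl_hyperacusis, ldl_reference])
labels = np.concatenate([np.ones(400), np.zeros(400)])
# Lower LDL should indicate hyperacusis, so negate the score before ranking.
auc = roc_auc_score(labels, -scores)
fpr, tpr, thresholds = roc_curve(labels, -scores)
print(f"AUC = {auc:.2f}")   # AUC for the simulated data; the real measure was
                            # neither sensitive nor specific enough on its own
```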

  20. Individual differences in selective attention predict speech identification at a cocktail party

    PubMed Central

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-01-01

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a proportion of variance similar to that explained by binaural sensitivity to the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise. DOI: http://dx.doi.org/10.7554/eLife.16747.001 PMID:27580272
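
    The comparison of explained variance described above can be illustrated by fitting separate linear regressions and comparing their R² values. The predictors, effect sizes, and data below are simulated for illustration only and are not the study's measures.

```python
# Sketch of comparing the variance in speech identification explained by
# selective-attention measures vs. binaural TFS sensitivity, using simulated
# data (the real predictors and effect sizes are reported in the paper).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 50
attention = rng.normal(size=(n, 2))   # auditory + visual attention scores (simulated)
tfs = rng.normal(size=(n, 1))         # binaural temporal-fine-structure sensitivity (simulated)
speech = 0.5 * attention[:, 0] + 0.4 * tfs[:, 0] + rng.normal(scale=1.0, size=n)

r2_attention = LinearRegression().fit(attention, speech).score(attention, speech)
r2_tfs = LinearRegression().fit(tfs, speech).score(tfs, speech)
print(f"R^2 attention: {r2_attention:.2f}, R^2 TFS: {r2_tfs:.2f}")
```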

  1. Comparisons of auditory brainstem response and sound level tolerance in tinnitus ears and non-tinnitus ears in unilateral tinnitus patients with normal audiograms.

    PubMed

    Shim, Hyun Joon; An, Yong-Hwi; Kim, Dong Hyun; Yoon, Ji Eun; Yoon, Ji Hyang

    2017-01-01

    Recently, "hidden hearing loss" with cochlear synaptopathy has been suggested as a potential pathophysiology of tinnitus in individuals with a normal hearing threshold. Several studies have demonstrated that subjects with tinnitus and normal audiograms show significantly reduced auditory brainstem response (ABR) wave I amplitudes compared with control subjects, but normal wave V amplitudes, suggesting increased central auditory gain. We aimed to reconfirm the "hidden hearing loss" theory through a within-subject comparison of wave I and wave V amplitudes and uncomfortable loudness level (UCL), which might be decreased with increased central gain, in tinnitus ears (TEs) and non-tinnitus ears (NTEs). Human subjects included 43 unilateral tinnitus patients (19 males, 24 females) with normal and symmetric hearing thresholds and 18 control subjects with normal audiograms. The amplitudes of wave I and V from the peak to the following trough were measured twice at 90 dB nHL and we separately assessed UCLs at 500 Hz and 3000 Hz pure tones in each TE and NTE. The within-subject comparison between TEs and NTEs showed no significant differences in wave I and wave V amplitude, or wave V/I ratio in both the male and female groups. Individual data revealed increased V/I amplitude ratios > mean + 2 SD in 3 TEs, but not in any control ears. We found no significant differences in UCL at 500 Hz or 3000 Hz between the TEs and NTEs, but the UCLs of both TEs and NTEs were lower than those of the control ears. Our ABR data do not represent meaningful evidence supporting the hypothesis of cochlear synaptopathy with increased central gain in tinnitus subjects with normal audiograms. However, reduced sound level tolerance in both TEs and NTEs might reflect increased central gain consequent on hidden synaptopathy that was subsequently balanced between the ears by lateral olivocochlear efferents.

  2. Comparisons of auditory brainstem response and sound level tolerance in tinnitus ears and non-tinnitus ears in unilateral tinnitus patients with normal audiograms

    PubMed Central

    An, Yong-Hwi; Kim, Dong Hyun; Yoon, Ji Eun; Yoon, Ji Hyang

    2017-01-01

    Objective Recently, “hidden hearing loss” with cochlear synaptopathy has been suggested as a potential pathophysiology of tinnitus in individuals with a normal hearing threshold. Several studies have demonstrated that subjects with tinnitus and normal audiograms show significantly reduced auditory brainstem response (ABR) wave I amplitudes compared with control subjects, but normal wave V amplitudes, suggesting increased central auditory gain. We aimed to reconfirm the “hidden hearing loss” theory through a within-subject comparison of wave I and wave V amplitudes and uncomfortable loudness level (UCL), which might be decreased with increased central gain, in tinnitus ears (TEs) and non-tinnitus ears (NTEs). Subjects and methods Human subjects included 43 unilateral tinnitus patients (19 males, 24 females) with normal and symmetric hearing thresholds and 18 control subjects with normal audiograms. The amplitudes of wave I and V from the peak to the following trough were measured twice at 90 dB nHL and we separately assessed UCLs at 500 Hz and 3000 Hz pure tones in each TE and NTE. Results The within-subject comparison between TEs and NTEs showed no significant differences in wave I and wave V amplitude, or wave V/I ratio in both the male and female groups. Individual data revealed increased V/I amplitude ratios > mean + 2 SD in 3 TEs, but not in any control ears. We found no significant differences in UCL at 500 Hz or 3000 Hz between the TEs and NTEs, but the UCLs of both TEs and NTEs were lower than those of the control ears. Conclusions Our ABR data do not represent meaningful evidence supporting the hypothesis of cochlear synaptopathy with increased central gain in tinnitus subjects with normal audiograms. However, reduced sound level tolerance in both TEs and NTEs might reflect increased central gain consequent on hidden synaptopathy that was subsequently balanced between the ears by lateral olivocochlear efferents. PMID:29253030

  3. Connections between Vision, Hearing, and Cognitive Function in Old Age.

    ERIC Educational Resources Information Center

    Wahl, Hans-Werner; Heyl, Vera

    2003-01-01

    Discusses findings of studies that examined the relationship between vision, hearing, and cognitive function in normally aging adults. Indicates that most found at least modest significant relationships between sensory and cognitive measures based on diverse assessment and design methods. (Contains 42 references.) (JOW)

  4. Predicting behavior problems in deaf and hearing children: The influences of language, attention, and parent–child communication

    PubMed Central

    Barker, David H.; Quittner, Alexandra L.; Fink, Nancy E.; Eisenberg, Laurie S.; Tobey, Emily A.; Niparko, John K.

    2009-01-01

    The development of language and communication may play an important role in the emergence of behavioral problems in young children, but they are rarely included in predictive models of behavioral development. In this study, cross-sectional relationships between language, attention, and behavior problems were examined using parent report, videotaped observations, and performance measures in a sample of 116 severely and profoundly deaf and 69 normally hearing children ages 1.5 to 5 years. Secondary analyses were performed on data collected as part of the Childhood Development After Cochlear Implantation Study, funded by the National Institutes of Health. Hearing-impaired children showed more language, attention, and behavioral difficulties, and spent less time communicating with their parents than normally hearing children. Structural equation modeling indicated there were significant relationships between language, attention, and child behavior problems. Language was associated with behavior problems both directly and indirectly through effects on attention. Amount of parent–child communication was not related to behavior problems. PMID:19338689

  5. Assessment of central auditory processing in a group of workers exposed to solvents.

    PubMed

    Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan

    2006-12-01

    Despite having normal hearing thresholds and speech recognition thresholds, a group of workers exposed to solvents showed abnormal results on central auditory tests. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds. A central auditory processing disorder may underlie these difficulties. The aim was to study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. Workers exposed to solvents scored lower than the control group and previously reported normative data on the majority of the tests.

  6. Predicting behavior problems in deaf and hearing children: the influences of language, attention, and parent-child communication.

    PubMed

    Barker, David H; Quittner, Alexandra L; Fink, Nancy E; Eisenberg, Laurie S; Tobey, Emily A; Niparko, John K

    2009-01-01

    The development of language and communication may play an important role in the emergence of behavioral problems in young children, but they are rarely included in predictive models of behavioral development. In this study, cross-sectional relationships between language, attention, and behavior problems were examined using parent report, videotaped observations, and performance measures in a sample of 116 severely and profoundly deaf and 69 normally hearing children ages 1.5 to 5 years. Secondary analyses were performed on data collected as part of the Childhood Development After Cochlear Implantation Study, funded by the National Institutes of Health. Hearing-impaired children showed more language, attention, and behavioral difficulties, and spent less time communicating with their parents than normally hearing children. Structural equation modeling indicated there were significant relationships between language, attention, and child behavior problems. Language was associated with behavior problems both directly and indirectly through effects on attention. Amount of parent-child communication was not related to behavior problems.

  7. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years of age. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  8. Early language development in children with profound hearing loss fitted with a device at a young age: part II--content of the first lexicon.

    PubMed

    Nott, Pauline; Cowan, Robert; Brown, P Margaret; Wigglesworth, Gillian

    2009-10-01

    Lexical content is commonly understood to refer to the various categories of words that children produce and has been studied extensively in children with normal hearing. In contrast, little is known about the word categories that make up the first lexicon of children with hearing loss (HL). Knowledge of the first lexicon is increasingly important, as infants with HL are now being detected through universal newborn hearing screening programs and fitted with hearing aids and cochlear implants before 12 months of age. For these children, emergence of the first spoken words is a major milestone eagerly awaited by parents and one of the first verbal language goals of teachers and therapists working with such children. The purpose of this study was to evaluate the lexical content of the first 50 and 100 words produced by children with HL and to contrast this with that of a group of hearing children. Lexical content was compared in two groups of children: one group composed of 24 participants with severe-profound or profound HL and a second group composed of 16 participants with normal hearing. Twenty-three participants in the HL group were fitted with a cochlear implant and one with bilateral hearing aids. All were "switched on" or fitted before 30 months of age. The Diary of Early Language (Di-EL) was used to collect a 100-word lexicon from each participant. All single word and frozen phrase data from each child's Di-EL were allocated to 1 of 15 word types grouped into four word categories (noun, predicate, grammatical, and paralexical), and the results were compared for both groups. The hearing and HL groups showed similar distributions of word categories, with nouns constituting the largest portion of the lexicon followed by predicates and paralexicals. Grammaticals made up the smallest portion of the lexicon. However, several significant differences were evident between the two groups. In both the 50- and 100-word lexicons, the hearing group used proportionately more nouns, fewer predicates, more common nouns, and fewer onomatopoeic words compared with the HL group. Further, more participants in the hearing group used grammatical word types other than adverbs (including pronouns) compared with the HL group. Overall, lexical content of the HL group was similar to that of the hearing group for both the 50- and 100-word lexicons, although some differences in proportional use were noted across word categories and types. It is suggested that differences in the quantity and diversity of language experienced by children with normal hearing compared with those with HL, together with differences in the input they receive, might in part explain these differences. The effect of quality of speech input and therapy method on the emerging lexicon and subsequent language development will be particularly important in informing appropriate intervention strategies for children with HL.

  9. The hearing-impaired child in the hearing society.

    PubMed

    Burton, M H

    1983-11-01

    This paper sets out to describe a method of educating the hearing-impaired which has been operating successfully for the past 18 years. The underlying tenet of our approach is that considerable communicative skills can be developed with children who have marked hearing loss. Even if the child is profoundly deaf, he or she has some sensory input which can be used as the basis for training in language development. The attempt to make the most of the minimal hearing of the hearing-impaired child has proved to be successful in the vast majority of cases. The profoundly hearing-impaired child can learn to listen and to produce the spoken word. This is demonstrated by use of video-tape. The interaction of teacher with child is heard and the regional accent can be identified. The prosodic features of the speech are retained although articulation may be incomplete. Intelligibility of utterance is shown to be a combination of rhythm, stress, and intonation based on previously heard patterns rather than on perfectly articulated sounds. The social consequence of this approach is that the child is not relegated to a minority subculture where only the deaf can communicate with the deaf but is allowed to enter into the world of normal relationships and expectations. Deaf children can be taught to listen and to use imperfectly heard patterns in order to interpret the meaning of language. This input of speech follows the natural language normally used by the child who is not deaf.

  10. False Belief Development in Children Who Are Hard of Hearing Compared With Peers With Normal Hearing

    PubMed Central

    Ambrose, Sophie E.; Oleson, Jacob; Moeller, Mary Pat

    2017-01-01

    Purpose This study investigates false belief (FB) understanding in children who are hard of hearing (CHH) compared with children with normal hearing (CNH) at ages 5 and 6 years and at 2nd grade. Research with this population has theoretical significance, given that the early auditory–linguistic experiences of CHH are less restricted compared with children who are deaf but not as complete as those of CNH. Method Participants included CHH and CNH who had completed FB tasks as part of a larger multicenter, longitudinal study on outcomes of children with mild-to-severe hearing loss. Both cross-sectional and longitudinal data were analyzed. Results At age 5 years, CHH demonstrated significant delays in FB understanding relative to CNH. Both hearing status and spoken-language abilities contributed to FB performance in 5-year-olds. A subgroup of CHH showed protracted delays at 6 years, suggesting that some CHH are at risk for longer term delays in FB understanding. By 2nd grade, performance on 1st- and 2nd-order FBs did not differ between CHH and CNH. Conclusions Preschool-age CHH are at risk for delays in understanding others' beliefs, which has consequences for their social interactions and pragmatic communication. Research related to FB in children with hearing loss has the potential to inform our understanding of mechanisms that support social–cognitive development, including the roles of language and conversational access. PMID:29209697

  11. The Role of Sentence Position, Allomorph, and Morpheme Type on Accurate Use of s-Related Morphemes by Children Who Are Hard of Hearing

    PubMed Central

    Koehlinger, Keegan; Oleson, Jacob; McCreery, Ryan; Moeller, Mary Pat

    2015-01-01

    Purpose Production accuracy of s-related morphemes was examined in 3-year-olds with mild-to-severe hearing loss, focusing on perceptibility, articulation, and input frequency. Method Morphemes with /s/, /z/, and /ɪz/ as allomorphs (plural, possessive, third-person singular –s, and auxiliary and copula “is”) were analyzed from language samples gathered from 51 children (ages: 2;10 [years;months] to 3;8) who are hard of hearing (HH), all of whom used amplification. Articulation was assessed via the Goldman-Fristoe Test of Articulation–Second Edition, and monomorphemic word final /s/ and /z/ production. Hearing was measured via better ear pure tone average, unaided Speech Intelligibility Index, and aided sensation level of speech at 4 kHz. Results Unlike results reported for children with normal hearing, the group of children who are HH correctly produced the /ɪz/ allomorph more than /s/ and /z/ allomorphs. Relative accuracy levels for morphemes and sentence positions paralleled those of children with normal hearing. The 4-kHz sensation level scores (but not the better ear pure tone average or Speech Intelligibility Index), the Goldman-Fristoe Test of Articulation–Second Edition, and word final s/z use all predicted accuracy. Conclusions Both better hearing and higher articulation scores are associated with improved morpheme production, and better aided audibility in the high frequencies and word final production of s/z are particularly critical for morpheme acquisition in children who are HH. PMID:25650750

  12. Brainstem auditory evoked potential wave V latency-intensity function in normal Dalmatian and Beagle puppies.

    PubMed

    Poncelet, L; Coppens, A; Deltenre, P

    2000-01-01

    This study investigated whether Dalmatian puppies with normal hearing bilaterally had the same click-evoked brainstem auditory potential characteristics as age-matched dogs of another breed. Short-latency brainstem auditory potentials evoked by condensation and rarefaction clicks were recorded in 23 1.5- to 2-month-old Dalmatian puppies with normal hearing bilaterally (confirmed by a qualitative brainstem auditory evoked potential test) and in 16 Beagle dogs of the same age. For each stimulus intensity, from 90 dB normal hearing level down to the wave V threshold, the potentials evoked by the two kinds of stimuli were summed, giving an equivalent of alternate click polarity stimulation. The slope of the L segment of the wave V latency-intensity curve was steeper in Dalmatian (-40 +/- 10 micros/dB) than in Beagle (-28 +/- 5 micros/dB, P < .001) puppies. The hearing threshold was lower in the Beagle puppies (P < .05). These results suggest that interbreed differences may exist at the level of cochlear function in this age class. The wave V latency and the wave V-wave I latency difference at high stimulus intensity also differed between the groups of puppies (4.3 +/- 0.2 and 2.5 +/- 0.2 milliseconds, respectively, for Beagles; and 4.1 +/- 0.2 and 2.3 +/- 0.2 milliseconds for Dalmatians, P < .05). A different maturation speed of the neural pathways is one possible explanation of this observation.
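
    The slope of the wave V latency-intensity function reported above (about -40 µs/dB in Dalmatians versus -28 µs/dB in Beagles) can be estimated with a simple linear fit over the L segment. The latencies in the sketch below are hypothetical.

```python
# Sketch of estimating the slope of the wave V latency-intensity function (the
# "L segment") with a linear fit; the latencies are hypothetical, in microseconds.
import numpy as np

intensity_db = np.array([90, 80, 70, 60, 50, 40])             # dB nHL
latency_us = np.array([4100, 4400, 4750, 5150, 5600, 6100])   # wave V latency

slope, intercept = np.polyfit(intensity_db, latency_us, 1)
print(f"slope = {slope:.0f} microseconds/dB")   # a steeper (more negative) slope,
                                                # as reported for the Dalmatians
```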

  13. Evaluation of Mandarin Chinese Speech Recognition in Adults with Cochlear Implants Using the Spectral Ripple Discrimination Test

    PubMed Central

    Dai, Chuanfu; Zhao, Zeqi; Zhang, Duo; Lei, Guanxiong

    2018-01-01

    Background The aim of this study was to explore the value of the spectral ripple discrimination test in speech recognition evaluation among a deaf (post-lingual) Mandarin-speaking population in China following cochlear implantation. Material/Methods The study included 23 Mandarin-speaking adult subjects with normal hearing (normal-hearing group) and 17 post-lingually deaf adults, formerly Mandarin speakers, with cochlear implants (cochlear implantation group). The normal-hearing subjects were divided into men (n=10) and women (n=13). The spectral ripple discrimination thresholds between the groups were compared. The correlation between spectral ripple discrimination thresholds and Mandarin speech recognition rates in the cochlear implantation group was studied. Results Spectral ripple discrimination thresholds did not correlate with age (r=−0.19; p=0.22), and there was no significant difference in spectral ripple discrimination thresholds between the male and female groups (p=0.654). Spectral ripple discrimination thresholds of deaf adults with cochlear implants were significantly correlated with monosyllabic recognition rates (r=0.84; p=0.000). Conclusions In a Mandarin Chinese-speaking population, spectral ripple discrimination thresholds of normal-hearing individuals were unaffected by both gender and age. Spectral ripple discrimination thresholds were correlated with Mandarin monosyllabic recognition rates in Mandarin-speaking post-lingual deaf adults with cochlear implants. The spectral ripple discrimination test is a promising method for speech recognition evaluation in adults following cochlear implantation in China. PMID:29806954
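
    The key statistic above is a Pearson correlation between spectral ripple discrimination thresholds and monosyllabic word scores. The sketch below shows the computation on invented paired values; it is not the study data.

```python
# Sketch of correlating spectral ripple discrimination thresholds with
# monosyllabic word recognition in CI users; the paired values are invented
# for illustration only.
import numpy as np
from scipy.stats import pearsonr

ripple_threshold = np.array([0.8, 1.2, 1.5, 2.0, 2.4, 3.1, 3.5, 4.0])    # ripples/octave
word_score = np.array([0.35, 0.42, 0.50, 0.55, 0.62, 0.70, 0.78, 0.85])  # proportion correct

r, p = pearsonr(ripple_threshold, word_score)
print(f"r = {r:.2f}, p = {p:.4f}")
```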

  14. Evaluation of Mandarin Chinese Speech Recognition in Adults with Cochlear Implants Using the Spectral Ripple Discrimination Test.

    PubMed

    Dai, Chuanfu; Zhao, Zeqi; Shen, Weidong; Zhang, Duo; Lei, Guanxiong; Qiao, Yuehua; Yang, Shiming

    2018-05-28

    BACKGROUND The aim of this study was to explore the value of the spectral ripple discrimination test in speech recognition evaluation among a deaf (post-lingual) Mandarin-speaking population in China following cochlear implantation. MATERIAL AND METHODS The study included 23 Mandarin-speaking adult subjects with normal hearing (normal-hearing group) and 17 post-lingually deaf adults, formerly Mandarin speakers, with cochlear implants (cochlear implantation group). The normal-hearing subjects were divided into men (n=10) and women (n=13). The spectral ripple discrimination thresholds between the groups were compared. The correlation between spectral ripple discrimination thresholds and Mandarin speech recognition rates in the cochlear implantation group was studied. RESULTS Spectral ripple discrimination thresholds did not correlate with age (r=-0.19; p=0.22), and there was no significant difference in spectral ripple discrimination thresholds between the male and female groups (p=0.654). Spectral ripple discrimination thresholds of deaf adults with cochlear implants were significantly correlated with monosyllabic recognition rates (r=0.84; p=0.000). CONCLUSIONS In a Mandarin Chinese-speaking population, spectral ripple discrimination thresholds of normal-hearing individuals were unaffected by both gender and age. Spectral ripple discrimination thresholds were correlated with Mandarin monosyllabic recognition rates in Mandarin-speaking post-lingual deaf adults with cochlear implants. The spectral ripple discrimination test is a promising method for speech recognition evaluation in adults following cochlear implantation in China.

  15. Children with bilateral cochlear implants identify emotion in speech and music.

    PubMed

    Volkova, Anna; Trehub, Sandra E; Schellenberg, E Glenn; Papsin, Blake C; Gordon, Karen A

    2013-03-01

    This study examined the ability of prelingually deaf children with bilateral implants to identify emotion (i.e. happiness or sadness) in speech and music. Participants in Experiment 1 were 14 prelingually deaf children from 5-7 years of age who had bilateral implants and 18 normally hearing children from 4-6 years of age. They judged whether linguistically neutral utterances produced by a man and woman sounded happy or sad. Participants in Experiment 2 were 14 bilateral implant users from 4-6 years of age and the same normally hearing children as in Experiment 1. They judged whether synthesized piano excerpts sounded happy or sad. Child implant users' accuracy of identifying happiness and sadness in speech was well above chance levels but significantly below the accuracy achieved by children with normal hearing. Similarly, their accuracy of identifying happiness and sadness in music was well above chance levels but significantly below that of children with normal hearing, who performed at ceiling. For the 12 implant users who participated in both experiments, performance on the speech task correlated significantly with performance on the music task and implant experience was correlated with performance on both tasks. Child implant users' accurate identification of emotion in speech exceeded performance in previous studies, which may be attributable to fewer response alternatives and the use of child-directed speech. Moreover, child implant users' successful identification of emotion in music indicates that the relevant cues are accessible at a relatively young age.

  16. No Association Between Time of Onset of Hearing Loss (Childhood Versus Adulthood) and Self-Reported Hearing Handicap in Adults.

    PubMed

    Aarhus, Lisa; Tambs, Kristian; Engdahl, Bo

    2015-12-01

    This study examined the association between time of onset of hearing loss (childhood vs. adulthood) and self-reported hearing handicap in adults. This is a population-based cohort study of 2,024 adults (mean = 48 years) with hearing loss (binaural pure-tone average 0.5-4 kHz ≥ 20 dB HL) who completed a hearing handicap questionnaire. In childhood, the same persons (N = 2,024) underwent audiometry in a school investigation (at ages 7, 10, and 13 years), in which 129 were diagnosed with sensorineural hearing loss (binaural pure-tone average 0.5-4 kHz ≥ 20 dB HL), whereas 1,895 had normal hearing thresholds. Hearing handicap was measured in adulthood as the sum-score of various speech perception and social impairment items (15 items). The sum-score increased with adult hearing threshold level (p < .001). After adjustment for adult hearing threshold level, hearing aid use, adult age, sex, and socioeconomic status, there was no significant difference in hearing handicap sum-score between the group with childhood-onset hearing loss (n = 129) and the group with adult-onset hearing loss (n = 1,895; p = .882). Self-reported hearing handicap in adults increased with hearing threshold level. After adjustment for adult hearing threshold level, this cohort study revealed no significant association between time of onset of hearing loss (childhood vs. adulthood) and self-reported hearing handicap.

  17. No Association Between Time of Onset of Hearing Loss (Childhood Versus Adulthood) and Self-Reported Hearing Handicap in Adults

    PubMed Central

    Tambs, Kristian; Engdahl, Bo

    2015-01-01

    Purpose This study examined the association between time of onset of hearing loss (childhood vs. adulthood) and self-reported hearing handicap in adults. Methods This is a population-based cohort study of 2,024 adults (mean = 48 years) with hearing loss (binaural pure-tone average 0.5–4 kHz ≥ 20 dB HL) who completed a hearing handicap questionnaire. In childhood, the same persons (N = 2,024) underwent audiometry in a school investigation (at ages 7, 10, and 13 years), in which 129 were diagnosed with sensorineural hearing loss (binaural pure-tone average 0.5–4 kHz ≥ 20 dB HL), whereas 1,895 had normal hearing thresholds. Results Hearing handicap was measured in adulthood as the sum-score of various speech perception and social impairment items (15 items). The sum-score increased with adult hearing threshold level (p < .001). After adjustment for adult hearing threshold level, hearing aid use, adult age, sex, and socioeconomic status, there was no significant difference in hearing handicap sum-score between the group with childhood-onset hearing loss (n = 129) and the group with adult-onset hearing loss (n = 1,895; p = .882). Conclusion Self-reported hearing handicap in adults increased with hearing threshold level. After adjustment for adult hearing threshold level, this cohort study revealed no significant association between time of onset of hearing loss (childhood vs. adulthood) and self-reported hearing handicap. PMID:26649831

  18. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    PubMed

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
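
    The central manipulation above, independent versus linked compression, can be illustrated with a static input/output rule: independent compressors shrink the interaural level difference (ILD), whereas a linked compressor applies one gain to both ears and preserves it. The kneepoint and compression ratio below are illustrative assumptions, and attack/release dynamics are ignored.

```python
# Sketch of the key manipulation: independent compressors at the two ears reduce
# the ILD, whereas a linked compressor applies the same gain to both ears and
# preserves it. Parameters are illustrative.
def compress_gain(level_db, threshold_db=50.0, ratio=3.0):
    """Static compressive gain (dB) above a kneepoint, ignoring attack/release."""
    over = max(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

left_db, right_db = 70.0, 60.0          # a 10 dB ILD favouring the left ear
ild_in = left_db - right_db

# Independent compression: each ear gets its own gain.
ild_independent = (left_db + compress_gain(left_db)) - (right_db + compress_gain(right_db))

# Linked compression: both ears get the gain computed from the louder ear.
linked_gain = compress_gain(max(left_db, right_db))
ild_linked = (left_db + linked_gain) - (right_db + linked_gain)

print(f"input ILD {ild_in:.1f} dB -> independent {ild_independent:.1f} dB, "
      f"linked {ild_linked:.1f} dB")
```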

  19. Perception of dissonance by people with normal hearing and sensorineural hearing loss

    NASA Astrophysics Data System (ADS)

    Tufts, Jennifer B.; Molis, Michelle R.; Leek, Marjorie R.

    2005-08-01

    The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered due to sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 Hz, and of HC dyads with fundamental frequencies geometrically centered at 500 Hz. The frequency separation of the members of the dyads varied from 0 Hz to just over an octave. In addition, frequency selectivity was assessed at 500 and 2000 Hz for each listener. Maximum dissonance was perceived at frequency separations smaller than the auditory filter bandwidth for both groups of listeners, but maximum dissonance for HI listeners occurred at a greater proportion of their bandwidths at 500 Hz than at 2000 Hz. Further, their auditory filter bandwidths at 500 Hz were significantly wider than those of the NH listeners. For both the PT and HC dyads, curves displaying dissonance as a function of frequency separation were more compressed for the HI listeners, possibly reflecting less contrast between their perceptions of consonance and dissonance compared with the NH listeners.

  20. Auditory stream segregation with multi-tonal complexes in hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Rogers, Deanna S.; Lentz, Jennifer J.

    2004-05-01

    The ability to segregate sounds into different streams was investigated in normally hearing and hearing-impaired listeners. Fusion and fission boundaries were measured using 6-tone complexes with tones equally spaced in log frequency. An ABA-ABA- sequence was used in which A represents a multitone complex ranging from either 250-1000 Hz (low-frequency region) or 1000-4000 Hz (high-frequency region). B also represents a multitone complex with the same log spacing as A. Multitonal complexes were 100 ms in duration with 20-ms ramps, and "-" represents a silent interval of 100 ms. To measure the fusion boundary, the first tone of the B stimulus was either 375 Hz (low) or 1500 Hz (high) and shifted downward in frequency with each progressive ABA triplet until the listener pressed a button indicating that a "galloping" rhythm was heard. When measuring the fission boundary, the first tone of the B stimulus was 252 or 1030 Hz and shifted upward with each triplet. Listeners then pressed a button when the "galloping" rhythm ended. Data suggest that hearing-impaired subjects have different fission and fusion boundaries than normal-hearing listeners. These data will be discussed in terms of both peripheral and central factors.
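
    A minimal sketch of constructing the ABA- triplet sequence described above from 6-tone complexes with equal log-frequency spacing follows; the durations, ramps, and starting frequencies follow the abstract, but the level calibration and the adaptive frequency shifting of the B complex are omitted.

```python
# Sketch of building an ABA- triplet sequence from 6-tone complexes with equal
# log-frequency spacing (low-frequency condition). Stimulus details beyond the
# abstract are simplified assumptions.
import numpy as np

FS = 44100

def complex_tone(f_low, f_high, n_tones=6, dur=0.1, ramp=0.02):
    t = np.arange(int(dur * FS)) / FS
    freqs = np.geomspace(f_low, f_high, n_tones)       # equal log spacing
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs) / n_tones
    n_ramp = int(ramp * FS)                            # 20-ms raised-cosine ramps
    window = np.ones_like(t)
    window[:n_ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    window[-n_ramp:] = window[:n_ramp][::-1]
    return tone * window

silence = np.zeros(int(0.1 * FS))                      # the "-" interval
a = complex_tone(250, 1000)                            # low-frequency A complex
b = complex_tone(375, 1500)                            # starting B complex (same log span)
triplet = np.concatenate([a, b, a, silence])           # one ABA- cycle
sequence = np.tile(triplet, 10)                        # repeated ABA-ABA- ... sequence
```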

  1. Audio reproduction for personal ambient home assistance: concepts and evaluations for normal-hearing and hearing-impaired persons.

    PubMed

    Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg

    2014-01-01

    Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments with 21 normal-hearing and hearing-impaired test subjects were carried out. The results show that, in a direct comparison of the sound presentation concepts, presentation by the single TV speaker was preferred most, whereas the phantom source concept received the highest acceptance ratings for the general concept. The localization accuracy of the phantom source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamically compressed signals, although processed speech was often described as being clearer.

  2. A correlational method to concurrently measure envelope and temporal fine structure weights: effects of age, cochlear pathology, and spectral shaping.

    PubMed

    Fogerty, Daniel; Humes, Larry E

    2012-09-01

    The speech signal may be divided into spectral frequency bands, each band containing temporal properties of the envelope and fine structure. This study measured the perceptual weights for the envelope and fine structure in each of three frequency bands for sentence materials in young normal-hearing listeners, older normal-hearing listeners, aided older hearing-impaired listeners, and spectrally matched young normal-hearing listeners. The availability of each acoustic property was independently varied through noisy signal extraction. Thus, the full speech stimulus was presented with noise used to mask six different auditory channels. Perceptual weights were determined by correlating a listener's performance with the signal-to-noise ratio of each acoustic property on a trial-by-trial basis. Results demonstrate that temporal fine structure perceptual weights remain stable across the four listener groups. However, a different weighting topography was observed across the listener groups for envelope cues. Results suggest that spectral shaping used to preserve the audibility of the speech stimulus may alter the allocation of perceptual resources. The relative perceptual weighting of envelope cues may also change with age. Concurrent testing of sentences repeated once on a previous day demonstrated that weighting strategies for all listener groups can change, suggesting an initial stabilization period or susceptibility to auditory training.
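
    The trial-by-trial correlational method described above can be sketched as follows: correlate the binary trial outcome with the signal-to-noise ratio assigned to each of the six auditory channels, then normalize the correlations to obtain relative weights. The simulated data and "true" weights below are assumptions for illustration only, not the authors' materials.

```python
# Sketch of the correlational (trial-by-trial) weighting method: correlate each
# trial's outcome with the SNR assigned to each envelope/fine-structure channel
# and normalize the correlations to obtain relative weights. Data are simulated.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(2)
n_trials = 400
# Per-trial SNRs (dB) for six auditory channels: 3 bands x {envelope, fine structure}.
snr = rng.uniform(-10, 10, size=(n_trials, 6))
true_weights = np.array([0.3, 0.25, 0.15, 0.1, 0.1, 0.1])   # assumed listener weights
# Simulated correct/incorrect outcome driven by the weighted SNRs plus noise.
correct = (snr @ true_weights + rng.normal(scale=4.0, size=n_trials)) > 0

r_values = np.array([pointbiserialr(correct.astype(float), snr[:, ch]).correlation
                     for ch in range(6)])
weights = r_values / r_values.sum()        # relative perceptual weights
print(np.round(weights, 2))
```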

  3. Binaural release from masking with single- and multi-electrode stimulation in children with cochlear implantsa)

    PubMed Central

    Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.

    2016-01-01

    Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132

  4. Binaural release from masking with single- and multi-electrode stimulation in children with cochlear implants.

    PubMed

    Todd, Ann E; Goupell, Matthew J; Litovsky, Ruth Y

    2016-07-01

    Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs.

  5. Usher syndrome type III can mimic other types of Usher syndrome.

    PubMed

    Pennings, Ronald J E; Fields, Randall R; Huygen, Patrick L M; Deutman, August F; Kimberling, William J; Cremers, Cor W R J

    2003-06-01

    Clinical and genetic characteristics are presented of 2 patients from a Dutch Usher syndrome type III family who have a new homozygous USH3 gene mutation: 149-152delCAGG + insTGTCCAAT. One individual (IV:1) is profoundly hearing impaired and has normal vestibular function and retinitis punctata albescens (RPA). The other individual is also profoundly hearing impaired, but has well-developed speech, vestibular areflexia, and retinitis pigmentosa sine pigmento (RPSP). These findings suggest that Usher syndrome type III can be clinically misdiagnosed as either Usher type I or II; that Usher syndrome patients who are profoundly hearing impaired and have normal vestibular function should be tested for USH3 mutations; and that RPA and RPSP can occur as fundoscopic manifestations of pigmentary retinopathy in Usher syndrome.

  6. Infrasonic and low-frequency insert earphone hearing threshold.

    PubMed

    Kuehler, Robert; Fedtke, Thomas; Hensel, Johannes

    2015-04-01

    Low-frequency and infrasonic pure-tone monaural hearing threshold data down to 2.5 Hz are presented. These measurements were made by means of a newly developed insert-earphone source. The source is able to generate pure-tone sound pressure levels up to 130 dB between 2 and 250 Hz with very low harmonic distortions. Behavioral hearing thresholds were determined in the frequency range from 2.5 to 125 Hz for 18 otologically normal test persons. The median hearing thresholds are comparable to values given in the literature. They are intended for stimulus calibration in subsequent brain imaging investigations.

  7. Physiopathology of the cochlear microcirculation.

    PubMed

    Shi, Xiaorui

    2011-12-01

    Normal blood supply to the cochlea is critically important for establishing the endocochlear potential and sustaining production of endolymph. Abnormal cochlear microcirculation has long been considered an etiologic factor in noise-induced hearing loss, age-related hearing loss (presbycusis), sudden hearing loss or vestibular function, and Meniere's disease. Knowledge of the mechanisms underlying the pathophysiology of cochlear microcirculation is of fundamental clinical importance. A better understanding of cochlear blood flow (CoBF) will enable more effective management of hearing disorders resulting from aberrant blood flow. This review focuses on recent discoveries and findings related to the physiopathology of the cochlear microvasculature.

  8. Physiopathology of the Cochlear Microcirculation

    PubMed Central

    Shi, Xiaorui

    2011-01-01

    Normal blood supply to the cochlea is critically important for establishing the endocochlear potential and sustaining production of endolymph. Abnormal cochlear microcirculation has long been considered an etiologic factor in noise-induced hearing loss, age-related hearing loss (presbycusis), sudden hearing loss or vestibular function, and Meniere's disease. Knowledge of the mechanisms underlying the pathophysiology of cochlear microcirculation is of fundamental clinical importance. A better understanding of cochlear blood flow (CoBF) will enable more effective management of hearing disorders resulting from aberrant blood flow. This review focuses on recent discoveries and findings related to the physiopathology of the cochlear microvasculature. PMID:21875658

  9. [Dichotic digit test. Case].

    PubMed

    Zenker Castro, Franz; Fernández Belda, Rafael; Barajas de Prat, José Juan

    2008-12-01

    In this study we present a case of a 71-year-old female patient with sensorineural hearing loss and fitted with bilateral hearing aids. The patient complained of scant benefit from the hearing aid fitting with difficulties in understanding speech with background noise. The otolaryngology examination was normal. Audiological tests revealed bilateral sensorineural hearing loss with threshold values of 51 and 50 dB HL in the right and left ear. The Dichotic Digit Test was administered in a divided attention mode and focalizing the attention to each ear. Results in this test are consistent with a Central Auditory Processing Disorder.

  10. Demographic Characteristics and Rates of Progress of Deaf and Hard of Hearing Persons Receiving Substance Abuse Treatment

    ERIC Educational Resources Information Center

    Moore, Dennis; McAweeney, Mary

    2007-01-01

    A lack of demographic information and data related to the achievement of short-term goals during substance abuse treatment among persons who are deaf or hard of hearing dictated the need for the study. New York State maintains a database on all individuals who participate in treatment. Within this database, 1.8% of persons in treatment for…

  11. Preliminary comparison of infants speech with and without hearing loss

    NASA Astrophysics Data System (ADS)

    McGowan, Richard S.; Nittrouer, Susan; Chenausky, Karen

    2005-04-01

    The speech of ten children with hearing loss and ten children without hearing loss aged 12 months is examined. All the children with hearing loss were identified before six months of age, and all have parents who wish them to become oral communicators. The data are from twenty-minute sessions with the caregiver and child, with their normal prostheses in place, in semi-structured settings. These data come from a larger test battery applied to both caregiver and child as part of a project comparing the development of children with hearing loss to those without hearing loss, known as the Early Development of Children with Hearing Loss. The speech comparisons are in terms of number of utterances, syllable shapes, and segment type. A subset of the data was given a detailed acoustic analysis, including formant frequencies and voice quality measures. [Work supported by NIDCD R01 006237 to Susan Nittrouer.]

  12. Dialogue enabling speech-to-text user assistive agent system for hearing-impaired person.

    PubMed

    Lee, Seongjae; Kang, Sunmee; Han, David K; Ko, Hanseok

    2016-06-01

    A novel approach for assisting bidirectional communication between people with normal hearing and people who are hearing-impaired is presented. While existing assistive devices for the hearing-impaired, such as hearing aids and cochlear implants, are vulnerable to extreme noise conditions or post-surgery side effects, the proposed concept is an alternative approach in which spoken dialogue is supported by a robust speech recognition technique that takes noisy environmental factors into account without requiring any attachment to the human body. The proposed system is a portable device with an acoustic beamformer for directional noise reduction that is capable of performing speech-to-text transcription using a keyword spotting method. It is also equipped with a user interface optimized for hearing-impaired people, rendering device usage intuitive and natural across diverse domain contexts. The experimental results confirm that the proposed interface design is feasible for realizing an effective and efficient intelligent agent for the hearing-impaired.

  13. [Diagnosis of psychogenic hearing disorders in childhood].

    PubMed

    Kothe, C; Fleischer, S; Breitfuss, A; Hess, M

    2003-11-01

    In comparison with organic hearing loss, which is commonly reported, non-organic hearing loss is under-represented in the literature. The audiological results for 20 patients, aged between 6 and 17 years (mean 11.3), with psychogenic hearing disturbances were analysed prospectively. In 17 cases, the disturbance was bilateral and in three cases unilateral. In no case was the result of an objective hearing test exceptional, while a hearing threshold of between 30 and 100 dB was reported in single ear, pure-tone audiograms. In 12 cases, single ear speech audiograms were unexceptional. Suprathreshold tests, such as the dichotic discrimination test or the speech audiogram with noise disturbance, could lead to a clearer diagnosis in cases of severe psychogenic auditory impairment. In half of the patients, a conflict situation in the school or family was evident. After treatment for this conflict, hearing ability returned to normal. There was no improvement for six patients.

  14. The neural consequences of age-related hearing loss

    PubMed Central

    Peelle, Jonathan E.; Wingfield, Arthur

    2016-01-01

    During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension. PMID:27262177

  15. Breath sounds

    MedlinePlus

    The lung sounds are best heard with a stethoscope. This is called auscultation. Normal lung sounds occur ... the bottom of the rib cage. Using a stethoscope, the doctor may hear normal breathing sounds, decreased ...

  16. Biological Impact of Music and Software-Based Auditory Training

    ERIC Educational Resources Information Center

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals--both young and old--encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in…

  17. The Use of Lexical Neighborhood Test (LNT) in the Assessment of Speech Recognition Performance of Cochlear Implantees with Normal and Malformed Cochlea.

    PubMed

    Kant, Anjali R; Banik, Arun A

    2017-09-01

    The present study aims to use the model-based Lexical Neighborhood Test (LNT) to assess speech recognition performance in early and late implanted hearing-impaired children with normal and malformed cochlea. The LNT was administered to 46 children with congenital (prelingual) bilateral severe-profound sensorineural hearing loss, using the Nucleus 24 cochlear implant. The children were grouped as follows. Group 1 (early implantees with normal cochlea, EI): n = 15; 3½-6½ years of age; mean age at implantation, 3½ years. Group 2 (late implantees with normal cochlea, LI): n = 15; 6-12 years of age; mean age at implantation, 5 years. Group 3 (early implantees with malformed cochlea, EIMC): n = 9; 4.9-10.6 years of age; mean age at implantation, 3.10 years. Group 4 (late implantees with malformed cochlea, LIMC): n = 7; 7-12.6 years of age; mean age at implantation, 6.3 years. The following were the malformations: dysplastic cochlea, common cavity, Mondini's, incomplete partition-1 and 2 (IP-1 and 2), enlarged IAC. The children were instructed to repeat the words on hearing them. Means of the word and phoneme scores were computed. The LNT can also be used to assess speech recognition performance of hearing-impaired children with malformed cochlea. When both easy and hard lists of the LNT are considered, although late implantees (with or without normal cochlea) achieved higher word scores than early implantees, the differences are not statistically significant. Using the LNT for assessing speech recognition enables a quantitative as well as descriptive report of phonological processes used by the children.

  18. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    PubMed

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the ease of language understanding model.
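
    The design described above crosses hearing status, listening condition, and word order, with reaction times as the dependent measure. A minimal sketch of how such data might be summarized per cell of the design (purely illustrative; the data frame and values are invented, not the authors' data):

    ```python
    # Hedged sketch: summarize reaction times by hearing status, noise
    # condition, and word order, mirroring the factorial design above.
    import pandas as pd

    rt_data = pd.DataFrame({
        "group":      ["HI", "HI", "NH", "NH", "HI", "NH"],       # hearing-impaired vs normal hearing
        "noise":      ["quiet", "noise", "quiet", "noise", "noise", "quiet"],
        "word_order": ["SO", "OS", "SO", "OS", "SO", "OS"],        # subject-initial vs object-initial
        "rt_ms":      [610, 742, 580, 655, 698, 601],
    })

    # Mean reaction time per cell of the design
    print(rt_data.groupby(["group", "noise", "word_order"])["rt_ms"].mean())
    ```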

  19. The performance-perceptual test and its relationship to unaided reported handicap.

    PubMed

    Saunders, Gabrielle H; Forsline, Anna; Fausti, Stephen A

    2004-04-01

    Measurement of hearing aid outcomes is necessary for demonstration of treatment efficacy, third-party payment, and cost-benefit analysis. Outcomes are usually measured with hearing-related questionnaires and/or tests of speech recognition. However, results from these two types of test often conflict. In this paper, we provide data from a new test measure, known as the Performance-Perceptual Test (PPT), in which subjective and performance aspects of hearing in noise are measured using the same test materials and procedures. A Performance Speech Reception Threshold (SRTN) and a Perceptual SRTN are measured using the Hearing In Noise Test materials and adaptive procedure. A third variable, the discrepancy between these two SRTNs, is also computed. It measures the accuracy with which subjects assess their own hearing ability and is referred to as the Performance-Perceptual Discrepancy (PPDIS). One hundred seven subjects between 24 and 83 yr of age took part. Thirty-three subjects had normal hearing, while the remaining seventy-four had symmetrical sensorineural hearing loss. Of the subjects with impaired hearing, 24 wore hearing aids and 50 did not. All subjects underwent routine audiological examination and completed the PPT and the Hearing Handicap Inventory for the Elderly/Adults on two occasions, between 1 and 2 wk apart. The PPT was conducted for unaided listening with the masker level set to 50, 65, and 80 dB SPL. PPT data show that the subjects with normal hearing have significantly better Performance and Perceptual SRTNs at each test level than the subjects with impaired hearing but that PPDIS values do not differ between the groups. Test-retest reliability for the PPT is excellent (r-values > 0.93 for all conditions). Stepwise multiple regression analysis showed that the Performance SRTN, the PPDIS, and age explain 40% of the variance in reported handicap (Hearing Handicap Inventory for the Elderly/Adults scores). More specifically, poorer performance, underestimation of hearing ability and younger age result in greater reported handicap, and vice versa. Reported handicap consists of a performance component and a (mis)perception component, as measured by the Performance SRTN and the PPDIS respectively. The PPT should thus prove to be a valuable tool for better understanding why some individuals complain of hearing difficulties but have only a mild hearing loss or conversely report few difficulties in the presence of substantial impairment. The measure would thus seem to provide both an explanation and a counseling tool for patients for whom there is a mismatch between reported and measured hearing difficulties.
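
    The two derived quantities in this entry are simple to express: the PPDIS is the discrepancy between the Perceptual and Performance SRTNs, and reported handicap is then modeled from Performance SRTN, PPDIS, and age by regression. A minimal sketch under those assumptions (not the authors' analysis; all values and the sign convention for PPDIS are invented for illustration, since the abstract defines PPDIS only as the discrepancy between the two SRTNs):

    ```python
    # Hedged sketch: compute PPDIS and regress reported handicap on
    # Performance SRTN, PPDIS, and age via ordinary least squares.
    import numpy as np

    performance_srtn = np.array([-2.1, 0.5, 3.2, 1.8, -0.4, 2.9])  # dB SNR
    perceptual_srtn  = np.array([-1.0, 2.0, 2.5, 4.1, -1.2, 3.5])  # dB SNR
    age              = np.array([35, 62, 71, 58, 44, 79])
    handicap_score   = np.array([10, 28, 34, 40, 8, 30])           # e.g., HHIE/A total

    # Discrepancy between the two SRTNs (sign convention assumed here)
    ppdis = perceptual_srtn - performance_srtn

    # handicap ~ intercept + performance SRTN + PPDIS + age
    X = np.column_stack([np.ones_like(age, dtype=float), performance_srtn, ppdis, age])
    coefs, *_ = np.linalg.lstsq(X, handicap_score, rcond=None)
    predicted = X @ coefs
    r_squared = 1 - np.sum((handicap_score - predicted) ** 2) / np.sum(
        (handicap_score - handicap_score.mean()) ** 2
    )
    print("Coefficients (intercept, perf SRTN, PPDIS, age):", coefs.round(2))
    print("R^2:", round(r_squared, 2))
    ```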

  20. Music students: conventional hearing thresholds and at high frequencies.

    PubMed

    Lüders, Débora; Gonçalves, Cláudia Giglio de Oliveira; Lacerda, Adriana Bender de Moreira; Ribas, Ângela; Conto, Juliana de

    2014-01-01

    Research has shown that hearing loss in musicians may cause difficulty in timbre recognition and tuning of instruments. The aim was to analyze hearing thresholds from 250 Hz to 16,000 Hz in a group of music students and compare them with a non-musician group, in order to determine whether high-frequency audiometry is a useful tool for the early detection of hearing impairment. The study design was a retrospective observational cohort. Conventional and high-frequency audiometry were performed in 42 music students (Madsen Itera II audiometer with TDH39P headphones for conventional audiometry and HDA 200 headphones for high-frequency audiometry). Of the 42 students, 38.1% were female and 61.9% male, with a mean age of 26 years. On conventional audiometry, 92.85% had hearing thresholds within normal limits; even within normal limits, however, thresholds were poorer in the left ear at all frequencies except 4000 Hz, and compared with the non-musician group the music students had poorer thresholds at 500 Hz in the left ear and at 250 Hz, 6000 Hz, 9000 Hz, 10,000 Hz, and 11,200 Hz in both ears. Periodic evaluation of high-frequency thresholds may be useful for the early detection of hearing loss in musicians. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
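
    The core comparison here is a per-frequency contrast of mean thresholds between the musician and non-musician groups across the conventional and high-frequency range. A minimal sketch of that comparison (the threshold values below are invented for illustration, not the study's data):

    ```python
    # Hedged sketch: compare mean left-ear thresholds between hypothetical
    # musician and non-musician groups at each test frequency.
    import numpy as np

    frequencies_hz = [250, 500, 1000, 2000, 4000, 6000, 8000,
                      9000, 10000, 11200, 12500, 14000, 16000]
    musicians = np.array([[10, 15, 5, 10, 15, 20, 15, 20, 25, 25, 30, 35, 40],
                          [5, 10, 10, 5, 10, 15, 20, 25, 20, 30, 35, 30, 45]])
    non_musicians = np.array([[5, 5, 5, 5, 10, 10, 10, 15, 15, 20, 25, 30, 35],
                              [0, 5, 10, 5, 5, 10, 15, 10, 20, 15, 25, 25, 30]])

    # Positive differences would indicate poorer (higher) thresholds in musicians
    for i, f in enumerate(frequencies_hz):
        diff = musicians[:, i].mean() - non_musicians[:, i].mean()
        print(f"{f:>6} Hz: mean difference {diff:+.1f} dB HL")
    ```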
